Test Report: Docker_Linux_containerd_arm64 19688

911e4c99bab82008a0d80e5fa9ba6656b1cfd206:2024-09-23:36337

Failed tests (8/327)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                              | 363.32       |
| 31    | TestAddons/serial/GCPAuth/Namespaces                   | 45.28        |
| 33    | TestAddons/parallel/Registry                           | 14.85        |
| 34    | TestAddons/parallel/Ingress                            | 483.5        |
| 38    | TestAddons/parallel/CSI                                | 374.17       |
| 39    | TestAddons/parallel/Headlamp                           | 483.79       |
| 41    | TestAddons/parallel/LocalPath                          | 0.46         |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart | 379.77       |
TestAddons/serial/Volcano (363.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 55.493941ms
addons_test.go:835: volcano-scheduler stabilized in 55.615835ms
addons_test.go:843: volcano-admission stabilized in 56.953741ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-79dc4b78bb-2vx88" [74b4a12a-ef6c-40d9-a5f6-e73012730d8a] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-scheduler" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:857: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:857: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
addons_test.go:857: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-23 11:48:52.760308473 +0000 UTC m=+799.270355820
addons_test.go:857: (dbg) Run:  kubectl --context addons-348379 describe po volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system
addons_test.go:857: (dbg) kubectl --context addons-348379 describe po volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system:
Name:                 volcano-scheduler-79dc4b78bb-2vx88
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 addons-348379/192.168.49.2
Start Time:           Mon, 23 Sep 2024 11:36:49 +0000
Labels:               app=volcano-scheduler
pod-template-hash=79dc4b78bb
Annotations:          <none>
Status:               Pending
IP:                   10.244.0.19
IPs:
IP:           10.244.0.19
Controlled By:  ReplicaSet/volcano-scheduler-79dc4b78bb
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hhr2p (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:          HostPath (bare host directory volume)
Path:          /tmp/klog-socks
HostPathType:  
kube-api-access-hhr2p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                  Age                  From               Message
----     ------                  ----                 ----               -------
Normal   Scheduled               12m                  default-scheduler  Successfully assigned volcano-system/volcano-scheduler-79dc4b78bb-2vx88 to addons-348379
Warning  FailedCreatePodSandBox  12m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "10142cdcc6a633dec5ba8079e810795d3d461c87b8b526b2e68a3f0d683a7292": failed to find network info for sandbox "10142cdcc6a633dec5ba8079e810795d3d461c87b8b526b2e68a3f0d683a7292"
Normal   Pulling                 9m53s (x4 over 11m)  kubelet            Pulling image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
Warning  Failed                  9m53s (x4 over 11m)  kubelet            Failed to pull image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": failed to pull and unpack image "docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": failed to resolve reference "docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized
Warning  Failed                  9m53s (x4 over 11m)  kubelet            Error: ErrImagePull
Warning  Failed                  9m39s (x6 over 11m)  kubelet            Error: ImagePullBackOff
Normal   BackOff                 2m2s (x38 over 11m)  kubelet            Back-off pulling image "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
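
These events identify the root cause: the image reference doubles the registry host, so containerd asks Docker Hub for a repository literally named "docker.io/volcanosh/vc-scheduler", which does not exist and answers the HEAD request with 401 Unauthorized. A minimal sketch of the failure mode from any Docker host (tag-only pulls for illustration; the de-duplicated reference is an assumption about the intended image, not something this run verifies):

	# Reference as written in the addon manifest: the registry host appears twice,
	# so "docker.io/volcanosh/vc-scheduler" becomes the repository path on Docker Hub.
	docker pull docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0   # expected to fail: 401 Unauthorized
	# Hypothetical corrected reference with the duplicated prefix dropped:
	docker pull docker.io/volcanosh/vc-scheduler:v1.10.0
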
addons_test.go:857: (dbg) Run:  kubectl --context addons-348379 logs volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system
addons_test.go:857: (dbg) Non-zero exit: kubectl --context addons-348379 logs volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system: exit status 1 (114.249635ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-79dc4b78bb-2vx88" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:857: kubectl --context addons-348379 logs volcano-scheduler-79dc4b78bb-2vx88 -n volcano-system: exit status 1
addons_test.go:858: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
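
Outside the test harness, the same waiting state can be read straight from the pod status; a sketch assuming the context and pod name from this run:

	kubectl --context addons-348379 -n volcano-system get pod volcano-scheduler-79dc4b78bb-2vx88 \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
	# prints ErrImagePull or ImagePullBackOff while the pull keeps failing
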
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-348379
helpers_test.go:235: (dbg) docker inspect addons-348379:

-- stdout --
	[
	    {
	        "Id": "1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751",
	        "Created": "2024-09-23T11:36:14.086186806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2904413,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:36:14.226479889Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hostname",
	        "HostsPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hosts",
	        "LogPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751-json.log",
	        "Name": "/addons-348379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-348379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-348379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3-init/diff:/var/lib/docker/overlay2/e2b16ea68ee0680d6b3555ff1ad64b95e5f88f6159373a302ec4d54fa432d99a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-348379",
	                "Source": "/var/lib/docker/volumes/addons-348379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-348379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-348379",
	                "name.minikube.sigs.k8s.io": "addons-348379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0be556c33c7ef40a6e70f0f396b5b7933e0b3ad3ca535a0d80fe041626578e74",
	            "SandboxKey": "/var/run/docker/netns/0be556c33c7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-348379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d5b669f6f456e67aee85da3145f607fecadf2af36a162932dd5e9bc9ffffee31",
	                    "EndpointID": "406dde2b8517c79d36b643a1ab3c5c13c7554eb0ba705342617104596090b341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-348379",
	                        "1973e07b6a14"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-348379 -n addons-348379
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-348379 logs -n 25: (1.528062685s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-611017              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | -o=json --download-only              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-423730              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | download-docker-021793               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-021793            | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | binary-mirror-046209                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34157               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-046209              | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| addons  | enable dashboard -p                  | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| start   | -p addons-348379 --wait=true         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:42 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:35:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:35:49.628900 2903914 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:35:49.629020 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629030 2903914 out.go:358] Setting ErrFile to fd 2...
	I0923 11:35:49.629036 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629290 2903914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 11:35:49.629765 2903914 out.go:352] Setting JSON to false
	I0923 11:35:49.630698 2903914 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155897,"bootTime":1726935453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:35:49.630769 2903914 start.go:139] virtualization:  
	I0923 11:35:49.632856 2903914 out.go:177] * [addons-348379] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 11:35:49.634664 2903914 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 11:35:49.634735 2903914 notify.go:220] Checking for updates...
	I0923 11:35:49.637703 2903914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:35:49.639443 2903914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:35:49.640961 2903914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 11:35:49.642654 2903914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 11:35:49.644154 2903914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:35:49.646005 2903914 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:35:49.674816 2903914 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:35:49.674959 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.740942 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.731429543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.741058 2903914 docker.go:318] overlay module found
	I0923 11:35:49.742922 2903914 out.go:177] * Using the docker driver based on user configuration
	I0923 11:35:49.744404 2903914 start.go:297] selected driver: docker
	I0923 11:35:49.744427 2903914 start.go:901] validating driver "docker" against <nil>
	I0923 11:35:49.744443 2903914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:35:49.745066 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.807015 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.798069613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.807243 2903914 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:35:49.807490 2903914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:35:49.810585 2903914 out.go:177] * Using Docker driver with root privileges
	I0923 11:35:49.812100 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:35:49.812180 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:35:49.812195 2903914 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 11:35:49.812295 2903914 start.go:340] cluster config:
	{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:35:49.814088 2903914 out.go:177] * Starting "addons-348379" primary control-plane node in "addons-348379" cluster
	I0923 11:35:49.815629 2903914 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 11:35:49.817311 2903914 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:35:49.818975 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:35:49.819031 2903914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:49.819044 2903914 cache.go:56] Caching tarball of preloaded images
	I0923 11:35:49.819072 2903914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:35:49.819129 2903914 preload.go:172] Found /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 11:35:49.819140 2903914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 11:35:49.819629 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:35:49.819663 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json: {Name:mk57bf6c9d1a024b95a9182333fb0e843fbdc049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:35:49.834226 2903914 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:49.834349 2903914 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:35:49.834370 2903914 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:35:49.834376 2903914 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:35:49.834383 2903914 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:35:49.834388 2903914 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 11:36:07.477009 2903914 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 11:36:07.477055 2903914 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:36:07.477102 2903914 start.go:360] acquireMachinesLock for addons-348379: {Name:mk0afc734c4276635047574670b52ff1624a597d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:36:07.477241 2903914 start.go:364] duration metric: took 114.625µs to acquireMachinesLock for "addons-348379"
	I0923 11:36:07.477273 2903914 start.go:93] Provisioning new machine with config: &{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:07.477361 2903914 start.go:125] createHost starting for "" (driver="docker")
	I0923 11:36:07.479499 2903914 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 11:36:07.479768 2903914 start.go:159] libmachine.API.Create for "addons-348379" (driver="docker")
	I0923 11:36:07.479806 2903914 client.go:168] LocalClient.Create starting
	I0923 11:36:07.479934 2903914 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem
	I0923 11:36:07.656758 2903914 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem
	I0923 11:36:07.895005 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 11:36:07.911037 2903914 cli_runner.go:211] docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 11:36:07.911143 2903914 network_create.go:284] running [docker network inspect addons-348379] to gather additional debugging logs...
	I0923 11:36:07.911165 2903914 cli_runner.go:164] Run: docker network inspect addons-348379
	W0923 11:36:07.926743 2903914 cli_runner.go:211] docker network inspect addons-348379 returned with exit code 1
	I0923 11:36:07.926792 2903914 network_create.go:287] error running [docker network inspect addons-348379]: docker network inspect addons-348379: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-348379 not found
	I0923 11:36:07.926806 2903914 network_create.go:289] output of [docker network inspect addons-348379]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-348379 not found
	
	** /stderr **
	I0923 11:36:07.926904 2903914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:07.941390 2903914 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3eb90}
	I0923 11:36:07.941437 2903914 network_create.go:124] attempt to create docker network addons-348379 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 11:36:07.941499 2903914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-348379 addons-348379
	I0923 11:36:08.013253 2903914 network_create.go:108] docker network addons-348379 192.168.49.0/24 created
	I0923 11:36:08.013292 2903914 kic.go:121] calculated static IP "192.168.49.2" for the "addons-348379" container
	I0923 11:36:08.013374 2903914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 11:36:08.030080 2903914 cli_runner.go:164] Run: docker volume create addons-348379 --label name.minikube.sigs.k8s.io=addons-348379 --label created_by.minikube.sigs.k8s.io=true
	I0923 11:36:08.048023 2903914 oci.go:103] Successfully created a docker volume addons-348379
	I0923 11:36:08.048128 2903914 cli_runner.go:164] Run: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 11:36:10.056258 2903914 cli_runner.go:217] Completed: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.008084592s)
	I0923 11:36:10.056295 2903914 oci.go:107] Successfully prepared a docker volume addons-348379
	I0923 11:36:10.056323 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:10.056345 2903914 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 11:36:10.056440 2903914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 11:36:14.019670 2903914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.963182399s)
	I0923 11:36:14.019706 2903914 kic.go:203] duration metric: took 3.963357873s to extract preloaded images to volume ...
	W0923 11:36:14.019879 2903914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 11:36:14.020008 2903914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 11:36:14.071499 2903914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-348379 --name addons-348379 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-348379 --network addons-348379 --ip 192.168.49.2 --volume addons-348379:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 11:36:14.406562 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Running}}
	I0923 11:36:14.430276 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.456577 2903914 cli_runner.go:164] Run: docker exec addons-348379 stat /var/lib/dpkg/alternatives/iptables
	I0923 11:36:14.524343 2903914 oci.go:144] the created container "addons-348379" has a running status.
	I0923 11:36:14.524373 2903914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa...
	I0923 11:36:14.817157 2903914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 11:36:14.839902 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.872963 2903914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 11:36:14.872990 2903914 kic_runner.go:114] Args: [docker exec --privileged addons-348379 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 11:36:14.951457 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.976890 2903914 machine.go:93] provisionDockerMachine start ...
	I0923 11:36:14.977005 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.007610 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.007901 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.007913 2903914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:36:15.203560 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.203583 2903914 ubuntu.go:169] provisioning hostname "addons-348379"
	I0923 11:36:15.203659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.229660 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.229941 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.229961 2903914 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-348379 && echo "addons-348379" | sudo tee /etc/hostname
	I0923 11:36:15.387302 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.387387 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.409587 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.409829 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.409846 2903914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-348379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-348379/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-348379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:36:15.552128 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:36:15.552218 2903914 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
	I0923 11:36:15.552276 2903914 ubuntu.go:177] setting up certificates
	I0923 11:36:15.552305 2903914 provision.go:84] configureAuth start
	I0923 11:36:15.552432 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:15.571035 2903914 provision.go:143] copyHostCerts
	I0923 11:36:15.571118 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
	I0923 11:36:15.571374 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
	I0923 11:36:15.571463 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
	I0923 11:36:15.571520 2903914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.addons-348379 san=[127.0.0.1 192.168.49.2 addons-348379 localhost minikube]
	I0923 11:36:15.936111 2903914 provision.go:177] copyRemoteCerts
	I0923 11:36:15.936188 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:36:15.936230 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.954080 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.048521 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:36:16.073032 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:36:16.096585 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
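The three scp operations above complete the Docker TLS trio (CA cert, server cert, server key) under /etc/docker. A quick way to sanity-check the copied material, sketched with the paths from the log (openssl 1.1.1+):

	# Verify the server cert chains to the provisioned CA, then inspect its SANs
	openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem   # expect: /etc/docker/server.pem: OK
	openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem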
	I0923 11:36:16.120809 2903914 provision.go:87] duration metric: took 568.476502ms to configureAuth
	I0923 11:36:16.120878 2903914 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:36:16.121066 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:16.121075 2903914 machine.go:96] duration metric: took 1.144158288s to provisionDockerMachine
	I0923 11:36:16.121082 2903914 client.go:171] duration metric: took 8.641266117s to LocalClient.Create
	I0923 11:36:16.121105 2903914 start.go:167] duration metric: took 8.641338888s to libmachine.API.Create "addons-348379"
	I0923 11:36:16.121117 2903914 start.go:293] postStartSetup for "addons-348379" (driver="docker")
	I0923 11:36:16.121127 2903914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:36:16.121180 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:36:16.121219 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.140164 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.237274 2903914 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:36:16.240688 2903914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:36:16.240726 2903914 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:36:16.240751 2903914 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:36:16.240759 2903914 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:36:16.240772 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
	I0923 11:36:16.240845 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
	I0923 11:36:16.240872 2903914 start.go:296] duration metric: took 119.748923ms for postStartSetup
	I0923 11:36:16.241197 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.257321 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:36:16.257608 2903914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:36:16.257659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.273475 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.364420 2903914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:36:16.369417 2903914 start.go:128] duration metric: took 8.892040374s to createHost
	I0923 11:36:16.369444 2903914 start.go:83] releasing machines lock for "addons-348379", held for 8.892189913s
	I0923 11:36:16.369525 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.386496 2903914 ssh_runner.go:195] Run: cat /version.json
	I0923 11:36:16.386558 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.386844 2903914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:36:16.386924 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.402767 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.407504 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.620856 2903914 ssh_runner.go:195] Run: systemctl --version
	I0923 11:36:16.625351 2903914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:36:16.629494 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 11:36:16.656196 2903914 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 11:36:16.656273 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:36:16.685634 2903914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
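The two find/sed passes above first add a "name" field and pin "cniVersion" to 1.0.0 in the loopback CNI config, then rename any bridge/podman configs out of the way (the two .mk_disabled files listed). Roughly what a patched loopback file looks like; the file name and untouched fields are illustrative, only the two edited fields come from the sed commands:

	cat /etc/cni/net.d/*loopback.conf*
	# {
	#     "cniVersion": "1.0.0",
	#     "name": "loopback",
	#     "type": "loopback"
	# }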
	I0923 11:36:16.685658 2903914 start.go:495] detecting cgroup driver to use...
	I0923 11:36:16.685694 2903914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:36:16.685752 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 11:36:16.698438 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:36:16.709958 2903914 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:36:16.710048 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:36:16.723912 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:36:16.738695 2903914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:36:16.833978 2903914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:36:16.926860 2903914 docker.go:233] disabling docker service ...
	I0923 11:36:16.926964 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:36:16.947106 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:36:16.959548 2903914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:36:17.053558 2903914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:36:17.135034 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:36:17.146613 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:36:17.163902 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:36:17.174292 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:36:17.185041 2903914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:36:17.185112 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:36:17.195986 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.206180 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:36:17.217391 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.228075 2903914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:36:17.237476 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:36:17.247362 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:36:17.257646 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:36:17.267821 2903914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:36:17.276866 2903914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:36:17.286512 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.359977 2903914 ssh_runner.go:195] Run: sudo systemctl restart containerd
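The sed series above rewrites /etc/containerd/config.toml in place before the restart: the pause image is pinned to registry.k8s.io/pause:3.10, restrict_oom_score_adj and SystemdCgroup are forced to false (matching the cgroupfs driver detected earlier), legacy runc v1 runtimes are mapped to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is set to true. A spot-check of the result, sketched; the expected values follow directly from those commands:

	grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
		/etc/containerd/config.toml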
	I0923 11:36:17.486930 2903914 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 11:36:17.487093 2903914 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 11:36:17.490604 2903914 start.go:563] Will wait 60s for crictl version
	I0923 11:36:17.490709 2903914 ssh_runner.go:195] Run: which crictl
	I0923 11:36:17.494017 2903914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:36:17.529914 2903914 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 11:36:17.530001 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.553062 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.581066 2903914 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 11:36:17.583092 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:17.598970 2903914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 11:36:17.602709 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.616517 2903914 kubeadm.go:883] updating cluster {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:36:17.616637 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:17.616705 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.653561 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.653588 2903914 containerd.go:534] Images already preloaded, skipping extraction
	I0923 11:36:17.653654 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.689043 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.689069 2903914 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:36:17.689077 2903914 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0923 11:36:17.689170 2903914 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-348379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
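The [Unit]/[Service]/[Install] fragment above is written as a systemd drop-in (the 10-kubeadm.conf scp'd below), clearing and then overriding ExecStart with the v1.31.1 kubelet and its node flags. One way to view the merged unit on the node, sketched:

	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in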
	I0923 11:36:17.689240 2903914 ssh_runner.go:195] Run: sudo crictl info
	I0923 11:36:17.725180 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:17.725207 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:17.725219 2903914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:36:17.725244 2903914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-348379 NodeName:addons-348379 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:36:17.725401 2903914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-348379"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
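	
	The rendered kubeadm config above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single file (written below as /var/tmp/minikube/kubeadm.yaml.new). A dry run is one way to vet such a file before an init; a sketch, it writes nothing to the node:
	
		kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run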
	
	I0923 11:36:17.725481 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:36:17.734617 2903914 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:36:17.734696 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:36:17.743298 2903914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 11:36:17.761798 2903914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:36:17.779190 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0923 11:36:17.797282 2903914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:36:17.801431 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.813470 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.904297 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:17.918201 2903914 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379 for IP: 192.168.49.2
	I0923 11:36:17.918278 2903914 certs.go:194] generating shared ca certs ...
	I0923 11:36:17.918311 2903914 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:17.918478 2903914 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
	I0923 11:36:18.402482 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt ...
	I0923 11:36:18.402521 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt: {Name:mka24ad8ce2563bd38493ad3048e3b202e9928cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.403346 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key ...
	I0923 11:36:18.403367 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key: {Name:mke301cd867e18ebea9d875f8c02fb489d6a0a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.404035 2903914 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
	I0923 11:36:18.591619 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt ...
	I0923 11:36:18.591652 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt: {Name:mkf29e9cf8d545d0d33d0ce8b9548c24a316f1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.591849 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key ...
	I0923 11:36:18.591862 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key: {Name:mk38a5ba119b442a98d5a1991cd20b7dc11fb378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.592490 2903914 certs.go:256] generating profile certs ...
	I0923 11:36:18.592562 2903914 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key
	I0923 11:36:18.592583 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt with IP's: []
	I0923 11:36:18.906095 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt ...
	I0923 11:36:18.906134 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: {Name:mk33102a3556c59cf025437aacb3628bfa41ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906340 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key ...
	I0923 11:36:18.906354 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key: {Name:mk1894a7120f896161f07a459fec6eb4fe11e236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906997 2903914 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb
	I0923 11:36:18.907023 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 11:36:19.176533 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb ...
	I0923 11:36:19.176571 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb: {Name:mkf3902710f18b86666bddc46eb9d246a2fd9230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177433 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb ...
	I0923 11:36:19.177458 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb: {Name:mkcc729dc3699800a37a33c607924c19bb2a2d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177602 2903914 certs.go:381] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt
	I0923 11:36:19.177687 2903914 certs.go:385] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key
	I0923 11:36:19.177743 2903914 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key
	I0923 11:36:19.177760 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt with IP's: []
	I0923 11:36:19.407564 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt ...
	I0923 11:36:19.407592 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt: {Name:mk60cabd91332996a9c3d4f42fab2e735667c2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408332 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key ...
	I0923 11:36:19.408353 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key: {Name:mk59bc62a8c0559971fc4c2dcb7a472d97d949c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408565 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:36:19.408610 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:36:19.408642 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:36:19.408675 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
	I0923 11:36:19.409266 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:36:19.438812 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:36:19.463399 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:36:19.487765 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:36:19.512554 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:36:19.537552 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:36:19.562531 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:36:19.587389 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:36:19.612515 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:36:19.638755 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:36:19.657011 2903914 ssh_runner.go:195] Run: openssl version
	I0923 11:36:19.662616 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:36:19.672363 2903914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675882 2903914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675950 2903914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.682948 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
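The b5213941.0 symlink name above is not arbitrary: OpenSSL looks CAs up by the subject hash of the certificate, so the link can be reproduced from the cert itself. A sketch:

	# openssl x509 -hash prints the subject hash that names the symlink (b5213941 here)
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"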
	I0923 11:36:19.692316 2903914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:36:19.695950 2903914 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:36:19.695999 2903914 kubeadm.go:392] StartCluster: {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:36:19.696080 2903914 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 11:36:19.696143 2903914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:36:19.737759 2903914 cri.go:89] found id: ""
	I0923 11:36:19.737855 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:36:19.746943 2903914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:36:19.756075 2903914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 11:36:19.756179 2903914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:36:19.767199 2903914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:36:19.767221 2903914 kubeadm.go:157] found existing configuration files:
	
	I0923 11:36:19.767362 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:36:19.776658 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:36:19.776748 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:36:19.785348 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:36:19.794057 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:36:19.794157 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:36:19.803152 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.812476 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:36:19.812574 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.821847 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:36:19.830557 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:36:19.830648 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:36:19.839343 2903914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 11:36:19.882483 2903914 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:36:19.882775 2903914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:36:19.901384 2903914 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 11:36:19.901507 2903914 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 11:36:19.901565 2903914 kubeadm.go:310] OS: Linux
	I0923 11:36:19.901637 2903914 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 11:36:19.901714 2903914 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 11:36:19.901776 2903914 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 11:36:19.901867 2903914 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 11:36:19.901970 2903914 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 11:36:19.902052 2903914 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 11:36:19.902117 2903914 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 11:36:19.902212 2903914 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 11:36:19.902294 2903914 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 11:36:19.967219 2903914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:36:19.967380 2903914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:36:19.967473 2903914 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:36:19.973209 2903914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:36:19.975734 2903914 out.go:235]   - Generating certificates and keys ...
	I0923 11:36:19.975831 2903914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:36:19.975904 2903914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:36:20.179743 2903914 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:36:21.047813 2903914 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:36:21.298950 2903914 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:36:22.135071 2903914 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:36:22.608576 2903914 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:36:22.609029 2903914 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:22.891026 2903914 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:36:22.891409 2903914 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:23.205606 2903914 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:36:23.489426 2903914 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:36:23.714238 2903914 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:36:23.714637 2903914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:36:23.917484 2903914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:36:24.438330 2903914 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:36:24.759712 2903914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:36:25.033943 2903914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:36:25.695483 2903914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:36:25.696139 2903914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:36:25.699152 2903914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:36:25.701227 2903914 out.go:235]   - Booting up control plane ...
	I0923 11:36:25.701329 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:36:25.702980 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:36:25.704104 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:36:25.714874 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:36:25.721041 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:36:25.721278 2903914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:36:25.822291 2903914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:36:25.822414 2903914 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:36:26.325513 2903914 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.540451ms
	I0923 11:36:26.325611 2903914 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:36:32.327784 2903914 kubeadm.go:310] [api-check] The API server is healthy after 6.002254671s
	I0923 11:36:32.349741 2903914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:36:32.363473 2903914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:36:32.390246 2903914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:36:32.390456 2903914 kubeadm.go:310] [mark-control-plane] Marking the node addons-348379 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:36:32.404726 2903914 kubeadm.go:310] [bootstrap-token] Using token: 9jvvlf.nkkd2cu2r67rq0id
	I0923 11:36:32.407710 2903914 out.go:235]   - Configuring RBAC rules ...
	I0923 11:36:32.407927 2903914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:36:32.415924 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:36:32.424675 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:36:32.430654 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:36:32.435150 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:36:32.439236 2903914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:36:32.750475 2903914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:36:33.169716 2903914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:36:33.734975 2903914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:36:33.736330 2903914 kubeadm.go:310] 
	I0923 11:36:33.736408 2903914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:36:33.736419 2903914 kubeadm.go:310] 
	I0923 11:36:33.736495 2903914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:36:33.736505 2903914 kubeadm.go:310] 
	I0923 11:36:33.736531 2903914 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:36:33.736593 2903914 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:36:33.736647 2903914 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:36:33.736656 2903914 kubeadm.go:310] 
	I0923 11:36:33.736710 2903914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:36:33.736719 2903914 kubeadm.go:310] 
	I0923 11:36:33.736766 2903914 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:36:33.736774 2903914 kubeadm.go:310] 
	I0923 11:36:33.736827 2903914 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:36:33.736907 2903914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:36:33.736980 2903914 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:36:33.736989 2903914 kubeadm.go:310] 
	I0923 11:36:33.737074 2903914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:36:33.737166 2903914 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:36:33.737175 2903914 kubeadm.go:310] 
	I0923 11:36:33.737258 2903914 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737363 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 \
	I0923 11:36:33.737389 2903914 kubeadm.go:310] 	--control-plane 
	I0923 11:36:33.737397 2903914 kubeadm.go:310] 
	I0923 11:36:33.737482 2903914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:36:33.737497 2903914 kubeadm.go:310] 
	I0923 11:36:33.737577 2903914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737677 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 
	I0923 11:36:33.741014 2903914 kubeadm.go:310] W0923 11:36:19.879102    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741352 2903914 kubeadm.go:310] W0923 11:36:19.880019    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741583 2903914 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 11:36:33.741697 2903914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
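The two v1beta3 deprecation warnings above are benign for this run; the migration they suggest would look like the following sketch (the output path is illustrative):

	kubeadm config migrate \
		--old-config /var/tmp/minikube/kubeadm.yaml \
		--new-config /var/tmp/minikube/kubeadm-v1beta4.yaml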
	I0923 11:36:33.741739 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:33.741754 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:33.744827 2903914 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 11:36:33.747515 2903914 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 11:36:33.751591 2903914 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 11:36:33.751612 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 11:36:33.770489 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 11:36:34.059421 2903914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:36:34.059507 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.059553 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-348379 minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=addons-348379 minikube.k8s.io/primary=true
	I0923 11:36:34.223427 2903914 ops.go:34] apiserver oom_adj: -16
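The oom_adj value of -16 read above sits on the legacy -17..15 scale (where -17 disables OOM kills entirely), so the apiserver is strongly protected from the OOM killer. The modern per-process knob alongside it, sketched:

	pid=$(pgrep kube-apiserver)
	cat /proc/$pid/oom_adj        # legacy scale, -17..15
	cat /proc/$pid/oom_score_adj  # current scale, -1000..1000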
	I0923 11:36:34.223613 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.724103 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.223668 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.724171 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.224392 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.724281 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.223628 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.724422 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.224601 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.335390 2903914 kubeadm.go:1113] duration metric: took 4.275959417s to wait for elevateKubeSystemPrivileges
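The repeated "get sa default" calls above poll until the default service account exists; combined with the minikube-rbac clusterrolebinding created earlier, kube-system:default ends up with cluster-admin. One way to confirm the elevation, sketched:

	kubectl auth can-i '*' '*' \
		--as=system:serviceaccount:kube-system:default   # expect: yes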
	I0923 11:36:38.335426 2903914 kubeadm.go:394] duration metric: took 18.639429725s to StartCluster
	I0923 11:36:38.335446 2903914 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336106 2903914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:36:38.336533 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336743 2903914 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:38.336888 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:36:38.337142 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.337173 2903914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:36:38.337257 2903914 addons.go:69] Setting yakd=true in profile "addons-348379"
	I0923 11:36:38.337273 2903914 addons.go:234] Setting addon yakd=true in "addons-348379"
	I0923 11:36:38.337299 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.337814 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338086 2903914 addons.go:69] Setting inspektor-gadget=true in profile "addons-348379"
	I0923 11:36:38.338109 2903914 addons.go:234] Setting addon inspektor-gadget=true in "addons-348379"
	I0923 11:36:38.338133 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338208 2903914 addons.go:69] Setting metrics-server=true in profile "addons-348379"
	I0923 11:36:38.338232 2903914 addons.go:234] Setting addon metrics-server=true in "addons-348379"
	I0923 11:36:38.338262 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338580 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338757 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.339010 2903914 addons.go:69] Setting cloud-spanner=true in profile "addons-348379"
	I0923 11:36:38.339029 2903914 addons.go:234] Setting addon cloud-spanner=true in "addons-348379"
	I0923 11:36:38.339055 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.339531 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.343469 2903914 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-348379"
	I0923 11:36:38.343503 2903914 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-348379"
	I0923 11:36:38.345984 2903914 out.go:177] * Verifying Kubernetes components...
	I0923 11:36:38.346027 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.346500 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.348537 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:38.345645 2903914 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-348379"
	I0923 11:36:38.365526 2903914 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:38.365567 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.366036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345659 2903914 addons.go:69] Setting default-storageclass=true in profile "addons-348379"
	I0923 11:36:38.373074 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-348379"
	I0923 11:36:38.373503 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345667 2903914 addons.go:69] Setting gcp-auth=true in profile "addons-348379"
	I0923 11:36:38.384131 2903914 mustload.go:65] Loading cluster: addons-348379
	I0923 11:36:38.384331 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.384582 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345675 2903914 addons.go:69] Setting ingress=true in profile "addons-348379"
	I0923 11:36:38.394024 2903914 addons.go:234] Setting addon ingress=true in "addons-348379"
	I0923 11:36:38.394117 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.345679 2903914 addons.go:69] Setting ingress-dns=true in profile "addons-348379"
	I0923 11:36:38.401504 2903914 addons.go:234] Setting addon ingress-dns=true in "addons-348379"
	I0923 11:36:38.404456 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.405051 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345945 2903914 addons.go:69] Setting registry=true in profile "addons-348379"
	I0923 11:36:38.410134 2903914 addons.go:234] Setting addon registry=true in "addons-348379"
	I0923 11:36:38.410210 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.411036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345953 2903914 addons.go:69] Setting storage-provisioner=true in profile "addons-348379"
	I0923 11:36:38.345956 2903914 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-348379"
	I0923 11:36:38.345960 2903914 addons.go:69] Setting volcano=true in profile "addons-348379"
	I0923 11:36:38.345964 2903914 addons.go:69] Setting volumesnapshots=true in profile "addons-348379"
	I0923 11:36:38.411256 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.454084 2903914 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:36:38.460897 2903914 addons.go:234] Setting addon storage-provisioner=true in "addons-348379"
	I0923 11:36:38.460953 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.461441 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.478144 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:36:38.478222 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:36:38.478322 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
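This cli_runner invocation (repeated once per addon below) resolves the host port Docker published for the node container's SSH port; the Go template indexes the "22/tcp" entry of .NetworkSettings.Ports. Run standalone it looks like this (same template as in the log; the example output matches the sshutil lines further down):

    # Prints the host port mapped to container port 22/tcp, e.g. 41792
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-348379
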
	I0923 11:36:38.487269 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-348379"
	I0923 11:36:38.487639 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.517904 2903914 addons.go:234] Setting addon volcano=true in "addons-348379"
	I0923 11:36:38.517966 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.518467 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.546347 2903914 addons.go:234] Setting addon volumesnapshots=true in "addons-348379"
	I0923 11:36:38.546411 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.546987 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.575673 2903914 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:36:38.575914 2903914 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:36:38.579125 2903914 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:38.579150 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:36:38.579221 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.579592 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:36:38.579639 2903914 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:36:38.579689 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.617860 2903914 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:36:38.622150 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:36:38.622177 2903914 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:36:38.622251 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.622735 2903914 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:36:38.624431 2903914 addons.go:234] Setting addon default-storageclass=true in "addons-348379"
	I0923 11:36:38.624466 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.628278 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.632340 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.636671 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:36:38.637544 2903914 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:38.637561 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:36:38.637622 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.655509 2903914 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:36:38.662366 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:36:38.665082 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:36:38.665107 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:36:38.665177 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.670015 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:36:38.676295 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:38.676385 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:36:38.676464 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.683562 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:36:38.686396 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:38.686419 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:36:38.686489 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.712304 2903914 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-348379"
	I0923 11:36:38.712346 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.712766 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.727410 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:36:38.730115 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:36:38.735471 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 11:36:38.735719 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.757102 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.760000 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:36:38.766393 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:36:38.766549 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:36:38.769513 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:36:38.771099 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.774083 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:36:38.774459 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:38.774507 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:36:38.774615 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.780134 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 11:36:38.788013 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:36:38.790672 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:36:38.794853 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:36:38.794879 2903914 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:36:38.794943 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.799096 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:36:38.799171 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:36:38.799267 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.815113 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 11:36:38.821525 2903914 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:38.821606 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 11:36:38.822703 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.827569 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850382 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850811 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.867021 2903914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:38.867042 2903914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:36:38.867113 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.883391 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.898738 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.922806 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.939666 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.947337 2903914 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:36:38.950063 2903914 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:36:38.952600 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:38.952625 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:36:38.952695 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.965723 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.981816 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.987132 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.991897 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	W0923 11:36:39.007637 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007696 2903914 retry.go:31] will retry after 239.86918ms: ssh: handshake failed: EOF
	W0923 11:36:39.007745 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007761 2903914 retry.go:31] will retry after 150.66552ms: ssh: handshake failed: EOF
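The two handshake failures above most likely mean the dial raced sshd starting up inside the node container; retry.go waits a short randomized interval and redials, and both clients connect on the next attempt. A rough shell equivalent of that retry-with-backoff pattern (a sketch, not minikube's actual implementation; host, port, key path, and the first two delays are taken from this log):

    # Redial SSH a few times with growing delays (~150ms, ~240ms, then 500ms)
    for delay in 0.15 0.24 0.5; do
      ssh -o StrictHostKeyChecking=no -p 41792 \
        -i /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa \
        docker@127.0.0.1 true && break
      sleep "$delay"
    done
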
	I0923 11:36:39.023482 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.023945 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.169972 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:39.170259 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 11:36:39.494971 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:39.556492 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:39.608428 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:39.623138 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:36:39.623206 2903914 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:36:39.709728 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:36:39.709758 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:36:39.777118 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:39.827433 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:36:39.827460 2903914 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:36:39.863119 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:39.870165 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:36:39.870234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:36:39.883044 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:39.890345 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:39.896655 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:36:39.896731 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:36:39.967387 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:36:39.967475 2903914 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:36:40.018137 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:36:40.018231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:36:40.030206 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:36:40.030287 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:36:40.050639 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:40.058613 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.058708 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:36:40.150893 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:36:40.151005 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:36:40.242616 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:36:40.242710 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:36:40.287048 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:36:40.287084 2903914 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:36:40.486475 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:36:40.486504 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:36:40.493831 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.494920 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:36:40.494943 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:36:40.557986 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.558018 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:36:40.619083 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:40.619108 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:36:40.657629 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:36:40.657660 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:36:40.734595 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:36:40.734624 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:36:40.801552 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:36:40.801595 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:36:40.865890 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.930189 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:36:40.930231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:36:40.951956 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:36:40.951998 2903914 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:36:40.961846 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:36:40.961876 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:36:40.972100 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:41.216224 2903914 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.216250 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:36:41.260577 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:36:41.260621 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:36:41.278445 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.294750 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:36:41.294791 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:36:41.323456 2903914 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.153148456s)
	I0923 11:36:41.323525 2903914 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 11:36:41.323500 2903914 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.153503843s)
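The sed pipeline completed above (2.15s, dispatched at 11:36:39.170) rewrites the coredns ConfigMap so in-cluster pods can resolve host.minikube.internal to the host-side gateway address. Going by the sed expressions in that command, the Corefile gains a hosts block ahead of the forward plugin and a log directive ahead of errors; this can be checked afterwards (a sketch):

    # Inspect the patched Corefile; per the sed script above, expect
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    # inserted before the existing "forward . /etc/resolv.conf" line.
    kubectl --context addons-348379 -n kube-system get configmap coredns -o yaml
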
	I0923 11:36:41.324480 2903914 node_ready.go:35] waiting up to 6m0s for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330122 2903914 node_ready.go:49] node "addons-348379" has status "Ready":"True"
	I0923 11:36:41.330152 2903914 node_ready.go:38] duration metric: took 5.643428ms for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330163 2903914 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:36:41.339728 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:41.543987 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:36:41.544058 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:36:41.648713 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.648741 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:36:41.746379 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:36:41.746407 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:36:41.828842 2903914 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-348379" context rescaled to 1 replicas
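kapi.go scales the coredns deployment down to a single replica, which is enough on a single-node cluster, and which is presumably why the pod wait below switches from coredns-7c65d6cfc9-g6mtd (removed by the scale-down) to the surviving coredns-7c65d6cfc9-ppz9h. The manual equivalent (a sketch):

    # One CoreDNS replica suffices on a single-node cluster
    kubectl --context addons-348379 -n kube-system scale deployment coredns --replicas=1
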
	I0923 11:36:41.902914 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.921919 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.42690557s)
	I0923 11:36:41.921979 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.365461937s)
	I0923 11:36:41.922006 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.313555468s)
	I0923 11:36:42.029240 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:36:42.029272 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:36:42.347806 2903914 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347883 2903914 pod_ready.go:82] duration metric: took 1.008114432s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	E0923 11:36:42.347920 2903914 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347968 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:42.549396 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:36:42.549431 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:36:42.830524 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:42.830560 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:36:43.211480 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:44.381316 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:44.884644 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.107485626s)
	I0923 11:36:45.843349 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:36:45.843447 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:45.872465 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.470866 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:36:46.702197 2903914 addons.go:234] Setting addon gcp-auth=true in "addons-348379"
	I0923 11:36:46.702270 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:46.702828 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:46.733217 2903914 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:36:46.733280 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:46.775337 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.854788 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:47.501824 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.638626321s)
	I0923 11:36:47.501900 2903914 addons.go:475] Verifying addon ingress=true in "addons-348379"
	I0923 11:36:47.504030 2903914 out.go:177] * Verifying ingress addon...
	I0923 11:36:47.506684 2903914 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:36:47.511450 2903914 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:36:47.511576 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.013380 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.544278 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.902915 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:49.026130 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.515743 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.786183 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.903059538s)
	I0923 11:36:49.786253 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.89583004s)
	I0923 11:36:49.786324 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.735598298s)
	I0923 11:36:49.786361 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.292505813s)
	I0923 11:36:49.786376 2903914 addons.go:475] Verifying addon registry=true in "addons-348379"
	I0923 11:36:49.786562 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.920643746s)
	I0923 11:36:49.786579 2903914 addons.go:475] Verifying addon metrics-server=true in "addons-348379"
	I0923 11:36:49.786620 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.814490533s)
	I0923 11:36:49.786930 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.508442064s)
	W0923 11:36:49.786964 2903914 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:36:49.786984 2903914 retry.go:31] will retry after 231.122068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
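This is a CRD ordering race: all six manifests go to the API server in one kubectl apply, and the VolumeSnapshotClass object from csi-hostpath-snapshotclass.yaml is validated before the CRDs created moments earlier in the same batch are being served, hence "ensure CRDs are installed first". minikube simply retries (the apply --force below succeeds, since by then the CRDs from the first attempt exist). An alternative that avoids the race entirely is a two-phase apply (a sketch using the file names from this log):

    # Phase 1: CRDs only, then block until the API server serves them
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io

    # Phase 2: the resources that depend on those CRDs
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
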
	I0923 11:36:49.787069 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.884124931s)
	I0923 11:36:49.788510 2903914 out.go:177] * Verifying registry addon...
	I0923 11:36:49.789825 2903914 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-348379 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:36:49.793761 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:36:49.857501 2903914 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:36:49.857529 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.019003 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:50.091953 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.88041396s)
	I0923 11:36:50.091996 2903914 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:50.092188 2903914 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.35893014s)
	I0923 11:36:50.094620 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:50.094707 2903914 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:36:50.096428 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.099435 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:36:50.101290 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:36:50.102965 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:36:50.103035 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:36:50.192143 2903914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:36:50.192217 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.253203 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:36:50.253267 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:36:50.330971 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.356163 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.356234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:36:50.430758 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.512255 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.604722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.797707 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.023077 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.104450 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.297470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.356586 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:51.511733 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.617197 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.799992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.872269 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853212545s)
	I0923 11:36:51.872449 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441621639s)
	I0923 11:36:51.875653 2903914 addons.go:475] Verifying addon gcp-auth=true in "addons-348379"
	I0923 11:36:51.880701 2903914 out.go:177] * Verifying gcp-auth addon...
	I0923 11:36:51.886238 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:36:51.899994 2903914 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
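From here the log is dominated by four interleaved kapi.go poll loops (ingress-nginx, registry, csi-hostpath-driver, gcp-auth), each re-listing the pods behind its label selector until they report Ready or the wait times out. The manual equivalent for any one of them (a sketch; the timeout value is assumed):

    kubectl --context addons-348379 -n ingress-nginx wait \
      --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx \
      --timeout=6m
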
	I0923 11:36:52.012221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.113517 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.299559 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.512212 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.605423 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.799422 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.854916 2903914 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.854943 2903914 pod_ready.go:82] duration metric: took 10.506940522s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.854956 2903914 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861027 2903914 pod_ready.go:93] pod "etcd-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.861059 2903914 pod_ready.go:82] duration metric: took 6.063045ms for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861112 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867900 2903914 pod_ready.go:93] pod "kube-apiserver-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.867934 2903914 pod_ready.go:82] duration metric: took 6.806328ms for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867947 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875428 2903914 pod_ready.go:93] pod "kube-controller-manager-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.875465 2903914 pod_ready.go:82] duration metric: took 7.477644ms for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875477 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881487 2903914 pod_ready.go:93] pod "kube-proxy-nqbmm" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.881527 2903914 pod_ready.go:82] duration metric: took 6.024203ms for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881558 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.013082 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.115359 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.261367 2903914 pod_ready.go:93] pod "kube-scheduler-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:53.261440 2903914 pod_ready.go:82] duration metric: took 379.865643ms for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.261468 2903914 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.298690 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:53.511926 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.605400 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.806273 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.013143 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.105281 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.297943 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.511688 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.604819 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.801146 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.012597 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.104977 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.267783 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:55.297710 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.513110 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.604704 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.797620 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.013461 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.105679 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.298722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.511968 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.607325 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.804470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.011592 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.104763 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.298808 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.511851 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.612444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.768189 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:57.797413 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.012279 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.104746 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.298054 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.511885 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.604675 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.801111 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.014365 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.115071 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.299589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.511922 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.604297 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.768227 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:59.798277 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.038178 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.105635 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.333590 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.512357 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.604689 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.799851 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.011394 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.105032 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.310842 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.511596 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.605046 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.768750 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:01.798428 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.013963 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.113737 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.297934 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.511024 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.605393 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.798053 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.012529 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.104923 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.298010 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.512557 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.604282 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.768828 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:03.798455 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.015940 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.104743 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.299059 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.512309 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.606024 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.798339 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.019138 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.104829 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.298375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.511973 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.604665 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.767100 2903914 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:05.767130 2903914 pod_ready.go:82] duration metric: took 12.505640545s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:05.767142 2903914 pod_ready.go:39] duration metric: took 24.436967089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:05.767156 2903914 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:37:05.767223 2903914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:05.781985 2903914 api_server.go:72] duration metric: took 27.445202984s to wait for apiserver process to appear ...
	I0923 11:37:05.782061 2903914 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:37:05.782092 2903914 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 11:37:05.789746 2903914 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 11:37:05.790754 2903914 api_server.go:141] control plane version: v1.31.1
	I0923 11:37:05.790781 2903914 api_server.go:131] duration metric: took 8.705461ms to wait for apiserver health ...
	I0923 11:37:05.790793 2903914 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:37:05.800773 2903914 system_pods.go:59] 18 kube-system pods found
	I0923 11:37:05.800813 2903914 system_pods.go:61] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.800824 2903914 system_pods.go:61] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.800829 2903914 system_pods.go:61] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.800839 2903914 system_pods.go:61] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.800844 2903914 system_pods.go:61] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.800848 2903914 system_pods.go:61] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.800852 2903914 system_pods.go:61] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.800856 2903914 system_pods.go:61] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.800860 2903914 system_pods.go:61] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.800866 2903914 system_pods.go:61] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.800870 2903914 system_pods.go:61] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.800875 2903914 system_pods.go:61] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.800884 2903914 system_pods.go:61] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.800892 2903914 system_pods.go:61] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.800908 2903914 system_pods.go:61] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.800916 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800924 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800931 2903914 system_pods.go:61] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.800938 2903914 system_pods.go:74] duration metric: took 10.139277ms to wait for pod list to return data ...
	I0923 11:37:05.800948 2903914 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:37:05.802797 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.803949 2903914 default_sa.go:45] found service account: "default"
	I0923 11:37:05.803977 2903914 default_sa.go:55] duration metric: took 3.018472ms for default service account to be created ...
	I0923 11:37:05.803986 2903914 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:37:05.813725 2903914 system_pods.go:86] 18 kube-system pods found
	I0923 11:37:05.813761 2903914 system_pods.go:89] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.813771 2903914 system_pods.go:89] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.813777 2903914 system_pods.go:89] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.813785 2903914 system_pods.go:89] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.813789 2903914 system_pods.go:89] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.813793 2903914 system_pods.go:89] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.813798 2903914 system_pods.go:89] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.813810 2903914 system_pods.go:89] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.813815 2903914 system_pods.go:89] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.813824 2903914 system_pods.go:89] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.813828 2903914 system_pods.go:89] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.813835 2903914 system_pods.go:89] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.813845 2903914 system_pods.go:89] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.813851 2903914 system_pods.go:89] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.813859 2903914 system_pods.go:89] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.813866 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813877 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813881 2903914 system_pods.go:89] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.813889 2903914 system_pods.go:126] duration metric: took 9.896579ms to wait for k8s-apps to be running ...
	I0923 11:37:05.813904 2903914 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:37:05.813964 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:37:05.826195 2903914 system_svc.go:56] duration metric: took 12.281298ms WaitForService to wait for kubelet
	I0923 11:37:05.826224 2903914 kubeadm.go:582] duration metric: took 27.489446441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:37:05.826246 2903914 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:37:05.829405 2903914 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 11:37:05.829438 2903914 node_conditions.go:123] node cpu capacity is 2
	I0923 11:37:05.829451 2903914 node_conditions.go:105] duration metric: took 3.199247ms to run NodePressure ...
	I0923 11:37:05.829481 2903914 start.go:241] waiting for startup goroutines ...
	I0923 11:37:06.016364 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.105089 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.297935 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:06.512579 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.605014 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.797841 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.011801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.104294 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.298088 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.511664 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.604893 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.801940 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.013764 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.105345 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.299766 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.512208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.605989 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.797920 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.013457 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.113285 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.298479 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.519639 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.621187 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.798092 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.023070 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.120870 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.299336 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.511998 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.604502 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.798656 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.011605 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.104357 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.297872 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.510891 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.605182 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.798112 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.016742 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.106392 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.302890 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.514134 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.606249 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.802344 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.012829 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.106630 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.297952 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.520400 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.609375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.799034 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.014344 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.118058 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.298788 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.510806 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.631742 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.797720 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.019826 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.106226 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.298830 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.511804 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.605411 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.798246 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.012028 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.104868 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.297740 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.524665 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.625850 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.797603 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.011338 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.103818 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.298307 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.512577 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.605193 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.798142 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.013543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.105035 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.297589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.512084 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.605341 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.798244 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.012543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.113664 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.298444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.511599 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.606804 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.798859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.017663 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.106755 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.297564 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.512160 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.604958 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.797812 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.013732 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.105616 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.298921 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.510795 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.604820 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.797800 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.012174 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.114441 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.298237 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.513158 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.625275 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.799161 2903914 kapi.go:107] duration metric: took 33.005400732s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 11:37:23.013096 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.105126 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:23.511910 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.613223 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.014075 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.105028 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.511000 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.605248 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.014495 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.111460 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.512685 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.612274 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.013492 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.105721 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.511127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.605859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.012240 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.105210 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.511957 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.604964 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.012182 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.104813 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.511895 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.605303 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.013053 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.104426 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.511346 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.603992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.014157 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.118062 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.511690 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.604969 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.014750 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.105518 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.511111 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.605002 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.012362 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.104960 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.511542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.604045 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.013753 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.104350 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.512293 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.614483 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:34.016127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:34.104734 2903914 kapi.go:107] duration metric: took 44.005304527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 11:37:34.510755 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.019903 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.510951 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.014603 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.510966 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.016710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.511532 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.013339 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.511853 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.012335 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.511569 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.013791 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.511129 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.011102 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.510793 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.015629 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.512029 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.011711 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.510801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.011721 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.511165 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.039643 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.511176 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.011883 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.511761 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.011001 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.511048 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.012925 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.511632 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.010792 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.511167 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.018047 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.511473 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.013208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.511710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.011604 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.512230 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.011680 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.511429 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.012725 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.511977 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.013759 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.512155 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.012542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.512221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:57.011684 2903914 kapi.go:107] duration metric: took 1m9.505004156s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 11:42:51.886491 2903914 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 11:42:51.886551 2903914 kapi.go:107] duration metric: took 6m0.000318633s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 11:42:51.886644 2903914 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 11:42:51.888644 2903914 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress
	I0923 11:42:51.890515 2903914 addons.go:510] duration metric: took 6m13.553319939s for enable addons: enabled=[ingress-dns nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress]
	I0923 11:42:51.890575 2903914 start.go:246] waiting for cluster config update ...
	I0923 11:42:51.890600 2903914 start.go:255] writing updated cluster config ...
	I0923 11:42:51.890918 2903914 ssh_runner.go:195] Run: rm -f paused
	I0923 11:42:52.246444 2903914 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:42:52.248672 2903914 out.go:177] * Done! kubectl is now configured to use "addons-348379" cluster and "default" namespace by default
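The 6m0s gcp-auth timeout above is the first hard failure in this run. A quick way to see which addon pods never became Ready is to list everything that is not Running or Completed (a minimal sketch, using the profile/context name from this run):

kubectl --context addons-348379 get pods --all-namespaces | grep -vE 'Running|Completed'

In this run that list would be expected to include the volcano-system pods, whose image pulls are failing; the containerd section below shows why.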
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b6876ad1ce80c       4f725bf50aaa5       27 seconds ago      Exited              gadget                                   7                   1161771f95540       gadget-xl5qc
	052af9fc2b4c1       289a818c8d9c5       10 minutes ago      Running             controller                               0                   47eb6c64c7a7d       ingress-nginx-controller-bc57996ff-6wq45
	967992dcbaf35       ee6d597e62dc8       11 minutes ago      Running             csi-snapshotter                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c27f769c6bce0       642ded511e141       11 minutes ago      Running             csi-provisioner                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	0faa259c6b420       922312104da8a       11 minutes ago      Running             liveness-probe                           0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	994b2f0a53275       08f6b2990811a       11 minutes ago      Running             hostpath                                 0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	ccaa6826447e2       9a80d518f102c       11 minutes ago      Running             csi-attacher                             0                   3c219582bf398       csi-hostpath-attacher-0
	63bc3e7f0cc00       420193b27261a       11 minutes ago      Exited              patch                                    0                   6fd7a092c3487       ingress-nginx-admission-patch-bfw56
	429fb2006ad16       77bdba588b953       11 minutes ago      Running             yakd                                     0                   c9ad9c335e299       yakd-dashboard-67d98fc6b-j4xcr
	c5db857da31e6       4d1e5c3e97420       11 minutes ago      Running             volume-snapshot-controller               0                   311300b7a364c       snapshot-controller-56fcc65765-d47ng
	f35fa36f73991       c9cf76bb104e1       11 minutes ago      Running             registry                                 0                   f3640fdf0d54f       registry-66c9cd494c-fhm8g
	d1ac50a6261dd       420193b27261a       11 minutes ago      Exited              create                                   0                   895d61d9c16ad       ingress-nginx-admission-create-fwt6v
	ac4d50ae15f93       4d1e5c3e97420       11 minutes ago      Running             volume-snapshot-controller               0                   698a149c7f280       snapshot-controller-56fcc65765-dchr7
	ed38cab12122b       0107d56dbc0be       11 minutes ago      Running             node-driver-registrar                    0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	ba68702d656c2       3410e1561990a       11 minutes ago      Running             registry-proxy                           0                   dd955d0d91d1c       registry-proxy-7qmf5
	6781c22ae4a4e       5548a49bb60ba       11 minutes ago      Running             metrics-server                           0                   ffa74d42d59a0       metrics-server-84c5f94fbc-dgpbq
	5e5a8e69b34fd       7ce2150c8929b       11 minutes ago      Running             local-path-provisioner                   0                   9bc0d2214e2a7       local-path-provisioner-86d989889c-h5pl9
	f32c490729dc3       be9cac3585579       11 minutes ago      Running             cloud-spanner-emulator                   0                   b7452ccbf189d       cloud-spanner-emulator-5b584cc74-lbht7
	7bd548d7da390       a9bac31a5be8d       11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   5582827316900       nvidia-device-plugin-daemonset-xqqn9
	cecddb85ce0f0       487fa743e1e22       11 minutes ago      Running             csi-resizer                              0                   85d2097f2da83       csi-hostpath-resizer-0
	a6cc88c765de7       1461903ec4fe9       11 minutes ago      Running             csi-external-health-monitor-controller   0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c37000ef28652       35508c2f890c4       12 minutes ago      Running             minikube-ingress-dns                     0                   d881770814e2f       kube-ingress-dns-minikube
	497f8c41b274e       2f6c962e7b831       12 minutes ago      Running             coredns                                  0                   d2d48d489a636       coredns-7c65d6cfc9-ppz9h
	b05bd4b18e280       ba04bb24b9575       12 minutes ago      Running             storage-provisioner                      0                   e816c2d6b5461       storage-provisioner
	a7e55167b7b39       6a23fa8fd2b78       12 minutes ago      Running             kindnet-cni                              0                   9389ee1ac67f5       kindnet-4kcdh
	005547c4c4723       24a140c548c07       12 minutes ago      Running             kube-proxy                               0                   a4defdda67173       kube-proxy-nqbmm
	9255b7a6f4a59       7f8aa378bb47d       12 minutes ago      Running             kube-scheduler                           0                   cae78edec7e41       kube-scheduler-addons-348379
	92413a7a8d6f6       279f381cb3736       12 minutes ago      Running             kube-controller-manager                  0                   731c7b79dc027       kube-controller-manager-addons-348379
	20e5f68e09619       d3f53a98c0a9d       12 minutes ago      Running             kube-apiserver                           0                   bdcb66b095a48       kube-apiserver-addons-348379
	8abc59946512a       27e3830e14027       12 minutes ago      Running             etcd                                     0                   a8c7d184a28f5       etcd-addons-348379
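The table above is CRI-level container status; it can be regenerated from inside the node with crictl (a sketch; minikube preconfigures crictl to talk to containerd on the node):

minikube -p addons-348379 ssh -- sudo crictl ps -a

Two notes on reading it: the Exited ingress-nginx-admission-create/patch entries are one-shot Jobs, so Exited is expected there, while the gadget container sitting at attempt 7 and Exited 27 seconds ago is a crash loop.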
	
	
	==> containerd <==
	Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.117587538Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\""
	Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.120581872Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.175115402Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized"
	Sep 23 11:48:09 addons-348379 containerd[816]: time="2024-09-23T11:48:09.175227787Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: active requests=0, bytes read=0"
	Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.118249869Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\""
	Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.120520226Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.171933737Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized"
	Sep 23 11:48:25 addons-348379 containerd[816]: time="2024-09-23T11:48:25.171975181Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: active requests=0, bytes read=0"
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.118491960Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.241710606Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.243226357Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.247087403Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 128.547919ms"
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.247131677Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.249056477Z" level=info msg="CreateContainer within sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" for container &ContainerMetadata{Name:gadget,Attempt:7,}"
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.268117104Z" level=info msg="CreateContainer within sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" for &ContainerMetadata{Name:gadget,Attempt:7,} returns container id \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\""
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.268809400Z" level=info msg="StartContainer for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\""
	Sep 23 11:48:26 addons-348379 containerd[816]: time="2024-09-23T11:48:26.321530401Z" level=info msg="StartContainer for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" returns successfully"
	Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.653119197Z" level=error msg="ExecSync for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" failed" error="failed to exec in container: failed to start exec \"dc49331fb3f2c4a26a2f9e86949a3baefd68bd73ef3d1c82061bf43b68102fc7\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.684794612Z" level=error msg="ExecSync for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" failed" error="failed to exec in container: failed to start exec \"019747eb7e72676a24024480f39aacf90192237b8d6124c9788b43ea4ceadfb1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.695931597Z" level=error msg="ExecSync for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" failed" error="failed to exec in container: failed to start exec \"cf8c091a6bd1a53b2cba362cad1080b3f4032f10863e843ca68860375e3096b9\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.819898227Z" level=info msg="shim disconnected" id=b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd namespace=k8s.io
	Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.819956180Z" level=warning msg="cleaning up after shim disconnected" id=b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd namespace=k8s.io
	Sep 23 11:48:27 addons-348379 containerd[816]: time="2024-09-23T11:48:27.819967503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 11:48:28 addons-348379 containerd[816]: time="2024-09-23T11:48:28.423586349Z" level=info msg="RemoveContainer for \"c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab\""
	Sep 23 11:48:28 addons-348379 containerd[816]: time="2024-09-23T11:48:28.431878571Z" level=info msg="RemoveContainer for \"c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab\" returns successfully"
	
	
	==> coredns [497f8c41b274e14c72d9933f13fac36b6c4acf8def62c9f3205823623e58226d] <==
	[INFO] 10.244.0.8:44197 - 54267 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180151s
	[INFO] 10.244.0.8:41305 - 6638 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001704017s
	[INFO] 10.244.0.8:41305 - 35565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141999s
	[INFO] 10.244.0.8:40738 - 27223 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080918s
	[INFO] 10.244.0.8:40738 - 10324 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129353s
	[INFO] 10.244.0.8:56286 - 60697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080098s
	[INFO] 10.244.0.8:56286 - 24093 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043076s
	[INFO] 10.244.0.8:43089 - 14336 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004978s
	[INFO] 10.244.0.8:43089 - 63494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035692s
	[INFO] 10.244.0.8:57209 - 23569 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041272s
	[INFO] 10.244.0.8:57209 - 18191 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041304s
	[INFO] 10.244.0.8:58538 - 8344 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004349043s
	[INFO] 10.244.0.8:58538 - 22686 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003876717s
	[INFO] 10.244.0.8:48601 - 29559 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050166s
	[INFO] 10.244.0.8:48601 - 628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054556s
	[INFO] 10.244.0.8:43467 - 15089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093161s
	[INFO] 10.244.0.8:43467 - 32245 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042002s
	[INFO] 10.244.0.8:54486 - 7532 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066798s
	[INFO] 10.244.0.8:54486 - 50287 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038416s
	[INFO] 10.244.0.8:33247 - 25963 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052963s
	[INFO] 10.244.0.8:33247 - 9581 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040123s
	[INFO] 10.244.0.8:42814 - 61299 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001881616s
	[INFO] 10.244.0.8:42814 - 6802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001453827s
	[INFO] 10.244.0.8:59224 - 13301 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046384s
	[INFO] 10.244.0.8:59224 - 30455 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035372s
	
	
	==> describe nodes <==
	Name:               addons-348379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-348379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=addons-348379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-348379
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-348379"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:36:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-348379
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:48:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-348379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb92b9be866447b6af2ef85f12013e1b
	  System UUID:                84447a6a-e17a-42ff-ba46-fb82e93bc172
	  Boot ID:                    d8899273-2c3a-49f7-8c9a-66d2209373ba
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-lbht7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-xl5qc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6wq45    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-ppz9h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-zdwf8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-348379                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-4kcdh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-348379                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-348379       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nqbmm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-348379                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-dgpbq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-xqqn9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-66c9cd494c-fhm8g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-7qmf5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-d47ng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-dchr7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-h5pl9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-admission-7f54bd7598-s85bg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-admission-init-f2bhm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-controllers-5ff7c5d4db-w658s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-scheduler-79dc4b78bb-2vx88          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-j4xcr              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-348379 event: Registered Node addons-348379 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [8abc59946512ab5ab4d902de194545051c7e577bd5196212bf85326ca705cd43] <==
	{"level":"info","ts":"2024-09-23T11:36:27.038394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-23T11:36:27.038527Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-23T11:36:27.823331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.825628Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-348379 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:36:27.825777Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827294Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827491Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827732Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.831921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.856413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.847318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:36:27.847973Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.857537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.856653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:28.587946Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1709}
	{"level":"info","ts":"2024-09-23T11:46:28.663638Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1709,"took":"75.159252ms","hash":39109330,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4308992,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-23T11:46:28.663685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":39109330,"revision":1709,"compact-revision":-1}
	
	
	==> kernel <==
	 11:48:54 up 1 day, 19:31,  0 users,  load average: 0.54, 0.52, 1.53
	Linux addons-348379 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a7e55167b7b39430011e4998f44d30eaec7edc15414dc232724161a09a27e599] <==
	I0923 11:46:50.320018       1 main.go:299] handling current node
	I0923 11:47:00.315432       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:47:00.315477       1 main.go:299] handling current node
	I0923 11:47:10.312153       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:47:10.312207       1 main.go:299] handling current node
	I0923 11:47:20.315766       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:47:20.315803       1 main.go:299] handling current node
	I0923 11:47:30.315485       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:47:30.315722       1 main.go:299] handling current node
	I0923 11:47:40.312339       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:47:40.312385       1 main.go:299] handling current node
	I0923 11:47:50.321041       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:47:50.321081       1 main.go:299] handling current node
	I0923 11:48:00.327253       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:00.327544       1 main.go:299] handling current node
	I0923 11:48:10.312132       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:10.312166       1 main.go:299] handling current node
	I0923 11:48:20.312412       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:20.312446       1 main.go:299] handling current node
	I0923 11:48:30.312298       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:30.312427       1 main.go:299] handling current node
	I0923 11:48:40.312125       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:40.312158       1 main.go:299] handling current node
	I0923 11:48:50.320905       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:50.320946       1 main.go:299] handling current node
	
	
	==> kube-apiserver [20e5f68e09619b9d622e831b0c429aae0e245dfb5c647d9e6fd9193c6cdfedac] <==
	W0923 11:44:54.755857       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:44:54.883323       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:44:54.883370       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:44:54.884993       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:45:54.765261       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:45:54.765303       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:45:54.767161       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:45:54.891992       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:45:54.892034       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:45:54.893739       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:46:54.776480       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:46:54.776526       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:46:54.778226       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:46:54.899951       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:46:54.900006       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:46:54.901791       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:47:47.180225       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:47:47.180267       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:47:47.182017       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:47:54.783948       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:47:54.783991       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:47:54.785615       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:47:54.908274       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:47:54.908316       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:47:54.909972       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	
	
	==> kube-controller-manager [92413a7a8d6f6d058e961a0759535062b02d7d50e5012e81614e6290ce3465b3] <==
	E0923 11:44:54.756577       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:44:54.757872       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:44:54.885583       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:44:54.886775       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:45:54.767939       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:45:54.769008       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:45:54.894366       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:45:54.895439       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:46:54.778891       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:46:54.780011       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:46:54.902541       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:46:54.903797       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:47:47.182800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="7.665714ms"
	E0923 11:47:47.182839       1 replica_set.go:560] "Unhandled Error" err="sync \"gcp-auth/gcp-auth-89d5ffd79\" failed with Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.786207       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.787446       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.910691       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.912612       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:48:17.574828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-348379"
	I0923 11:48:20.132889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="65.493µs"
	I0923 11:48:20.146254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="49.452µs"
	I0923 11:48:33.131858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="68.085µs"
	I0923 11:48:35.131468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="44.004µs"
	I0923 11:48:38.129745       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 11:48:52.131254       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	
	
	==> kube-proxy [005547c4c4723cd6ce4dac939ebab2b2d89e428b99ec971a179497842dcb5abe] <==
	I0923 11:36:39.711887       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:36:39.808570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:36:39.808647       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:36:39.869255       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:36:39.869320       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:36:39.872284       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:36:39.872851       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:36:39.872865       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:36:39.885041       1 config.go:199] "Starting service config controller"
	I0923 11:36:39.885075       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:36:39.885100       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:36:39.885105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:36:39.886870       1 config.go:328] "Starting node config controller"
	I0923 11:36:39.886882       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:36:39.986150       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:36:39.986235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:36:39.987256       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9255b7a6f4a590493be04a5bbbdf14b0efddfe1f321b8a25d2eed1055c6741df] <==
	W0923 11:36:31.587742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:36:31.587843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:36:31.587943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:36:31.588052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:36:31.588221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.588380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:36:31.588540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:36:31.588655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:36:31.588819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:36:31.588921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:36:31.589185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 11:36:32.574074       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:48:20 addons-348379 kubelet[1462]: E0923 11:48:20.117988    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:48:20 addons-348379 kubelet[1462]: E0923 11:48:20.118030    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:48:22 addons-348379 kubelet[1462]: I0923 11:48:22.116350    1462 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-fhm8g" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.172344    1462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.172431    1462 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.173095    1462 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:main,Image:docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e,Command:[./gen-admission-secret.sh --service volcano-admission-service --namespace volcano-system --secret volcano-admission-secret],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-admission-init-f2bhm_volcano-system(5ae35267-183a-42ba-96bc-03dac14139ac): ErrImagePull: failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" logger="UnhandledError"
	Sep 23 11:48:25 addons-348379 kubelet[1462]: E0923 11:48:25.174440    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\": failed to resolve reference \\\"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:48:26 addons-348379 kubelet[1462]: I0923 11:48:26.116758    1462 scope.go:117] "RemoveContainer" containerID="c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab"
	Sep 23 11:48:27 addons-348379 kubelet[1462]: E0923 11:48:27.653857    1462 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"dc49331fb3f2c4a26a2f9e86949a3baefd68bd73ef3d1c82061bf43b68102fc7\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 11:48:27 addons-348379 kubelet[1462]: E0923 11:48:27.685041    1462 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"019747eb7e72676a24024480f39aacf90192237b8d6124c9788b43ea4ceadfb1\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 11:48:27 addons-348379 kubelet[1462]: E0923 11:48:27.696184    1462 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"cf8c091a6bd1a53b2cba362cad1080b3f4032f10863e843ca68860375e3096b9\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 23 11:48:28 addons-348379 kubelet[1462]: I0923 11:48:28.116782    1462 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xqqn9" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 11:48:28 addons-348379 kubelet[1462]: I0923 11:48:28.416858    1462 scope.go:117] "RemoveContainer" containerID="c320da8356341a0dbcce4b452c8bac9e58aa8b49392b7faba1379fcdc1450bab"
	Sep 23 11:48:28 addons-348379 kubelet[1462]: I0923 11:48:28.417440    1462 scope.go:117] "RemoveContainer" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd"
	Sep 23 11:48:28 addons-348379 kubelet[1462]: E0923 11:48:28.418317    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
	Sep 23 11:48:31 addons-348379 kubelet[1462]: I0923 11:48:31.107136    1462 scope.go:117] "RemoveContainer" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd"
	Sep 23 11:48:31 addons-348379 kubelet[1462]: E0923 11:48:31.107852    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
	Sep 23 11:48:33 addons-348379 kubelet[1462]: E0923 11:48:33.118414    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:48:35 addons-348379 kubelet[1462]: E0923 11:48:35.117742    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:48:38 addons-348379 kubelet[1462]: E0923 11:48:38.118766    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:48:43 addons-348379 kubelet[1462]: I0923 11:48:43.117018    1462 scope.go:117] "RemoveContainer" containerID="b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd"
	Sep 23 11:48:43 addons-348379 kubelet[1462]: E0923 11:48:43.117220    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
	Sep 23 11:48:46 addons-348379 kubelet[1462]: E0923 11:48:46.117696    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:48:47 addons-348379 kubelet[1462]: E0923 11:48:47.117833    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:48:52 addons-348379 kubelet[1462]: E0923 11:48:52.117495    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	
	
	==> storage-provisioner [b05bd4b18e2804c1706af6011e03e349f643f23d7f968ca74ffb0f2eaf78047d] <==
	I0923 11:36:43.550112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:36:43.591903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:36:43.592066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:36:43.604224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:36:43.604420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	I0923 11:36:43.605459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdf468f-763c-4a54-98e3-d90ea0e2e8e5", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-348379_e926b365-1f0f-4822-899e-75d077991921 became leader
	I0923 11:36:43.707026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	

-- /stdout --
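
A side note on the storage-provisioner output above: it is the one component in this dump behaving as designed. It initializes, acquires the kube-system/k8s.io-minikube-hostpath lock (the event is recorded against an Endpoints object, the older client-go lock type), and only then starts its controller, so a second replica could never provision concurrently. For readers unfamiliar with the pattern, a minimal sketch of the same flow with current client-go, which locks a Lease object instead; this is illustrative only, not the provisioner's actual source, and the kubeconfig handling and timing constants are assumptions:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        // Assumes KUBECONFIG points at a working cluster config.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname()

        lock := &resourcelock.LeaseLock{
            // Same lock name and namespace as in the log above.
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     clientset.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // The "Starting provisioner controller" step happens here.
                },
                OnStoppedLeading: func() {
                    // Lost the lease; a real controller would shut down.
                },
            },
        })
    }
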
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
helpers_test.go:261: (dbg) Run:  kubectl --context addons-348379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1 (104.979406ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fwt6v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bfw56" not found
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-s85bg" not found
	Error from server (NotFound): pods "volcano-admission-init-f2bhm" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-w658s" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-2vx88" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1
--- FAIL: TestAddons/serial/Volcano (363.32s)
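
One detail worth pulling out of the kubelet log above: every failed Volcano pull references an image of the form docker.io/docker.io/volcanosh/... with a doubled registry prefix, which no registry serves, so all three volcano-system pods sit in ImagePullBackOff until the 6m0s wait expires. Where the extra prefix gets prepended is not visible in this run, but the usual shape of such a bug is a default-registry qualifier that never checks whether the reference is already fully qualified. A minimal Go sketch of that failure mode (the helper names are hypothetical, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // qualifyBuggy unconditionally prepends the default registry, which
    // doubles the prefix on an already-qualified reference. This is the
    // kind of code that yields "docker.io/docker.io/volcanosh/...".
    func qualifyBuggy(image string) string {
        return "docker.io/" + image
    }

    // qualify only prepends when the reference is not already qualified.
    // Real normalizers (Docker, containerd) decide by checking whether
    // the first path component looks like a host, i.e. contains "." or ":".
    func qualify(image string) string {
        if strings.HasPrefix(image, "docker.io/") {
            return image
        }
        return "docker.io/" + image
    }

    func main() {
        ref := "docker.io/volcanosh/vc-scheduler:v1.10.0"
        fmt.Println(qualifyBuggy(ref)) // docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0 (unpullable)
        fmt.Println(qualify(ref))      // docker.io/volcanosh/vc-scheduler:v1.10.0
    }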

TestAddons/serial/GCPAuth/Namespaces (45.28s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-348379 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-348379 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-348379 get secret gcp-auth -n new-namespace: exit status 1 (86.718202ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-348379 logs -l app=gcp-auth -n gcp-auth
I0923 11:48:55.836261 2903158 retry.go:31] will retry after 1.749605851s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context addons-348379 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-348379 get secret gcp-auth -n new-namespace: exit status 1 (83.865169ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-348379 logs -l app=gcp-auth -n gcp-auth
I0923 11:48:57.754761 2903158 retry.go:31] will retry after 4.37871502s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context addons-348379 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-348379 get secret gcp-auth -n new-namespace: exit status 1 (81.287892ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-348379 logs -l app=gcp-auth -n gcp-auth
I0923 11:49:02.303981 2903158 retry.go:31] will retry after 5.027209164s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context addons-348379 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-348379 get secret gcp-auth -n new-namespace: exit status 1 (85.551611ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-348379 logs -l app=gcp-auth -n gcp-auth
I0923 11:49:07.502599 2903158 retry.go:31] will retry after 5.933931853s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context addons-348379 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-348379 get secret gcp-auth -n new-namespace: exit status 1 (81.147487ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-348379 logs -l app=gcp-auth -n gcp-auth
I0923 11:49:13.602838 2903158 retry.go:31] will retry after 8.452477782s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context addons-348379 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-348379 get secret gcp-auth -n new-namespace: exit status 1 (85.569744ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-348379 logs -l app=gcp-auth -n gcp-auth
I0923 11:49:22.223467 2903158 retry.go:31] will retry after 18.461147541s: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

** /stderr **
addons_test.go:608: (dbg) Run:  kubectl --context addons-348379 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-348379 get secret gcp-auth -n new-namespace: exit status 1 (89.964294ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-348379 logs -l app=gcp-auth -n gcp-auth
addons_test.go:616: failed to get secret: %!w(<nil>): gcp-auth container logs: 
** stderr ** 
	No resources found in gcp-auth namespace.

** /stderr **
--- FAIL: TestAddons/serial/GCPAuth/Namespaces (45.28s)
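
The failure mode here is simpler than the log volume suggests: no gcp-auth pods ever came up ("No resources found in gcp-auth namespace"), so the gcp-auth secret that would have been replicated into new-namespace can never appear, and the harness spends its budget polling with growing, jittered delays (roughly 1.7s, 4.4s, 5.0s, 5.9s, 8.5s, 18.5s) before giving up at 45.28s. For reference, a minimal Go sketch of that poll-until-deadline shape, shelling out to the same kubectl command; the backoff constants here are assumptions, not retry.go's actual parameters:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(45 * time.Second)
        delay := 2 * time.Second
        for time.Now().Before(deadline) {
            // Same probe the test runs between retries.
            cmd := exec.Command("kubectl", "--context", "addons-348379",
                "get", "secret", "gcp-auth", "-n", "new-namespace")
            if cmd.Run() == nil {
                fmt.Println("secret found")
                return
            }
            // Jittered exponential backoff, roughly matching the
            // observed 1.7s, 4.4s, 5.0s, 5.9s, 8.5s, 18.5s sequence.
            time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
        fmt.Println("timed out waiting for gcp-auth secret")
    }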

TestAddons/parallel/Registry (14.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.004726ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004393782s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006146317s
addons_test.go:338: (dbg) Run:  kubectl --context addons-348379 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-348379 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-348379 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (93.747875ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-348379 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 ip
2024/09/23 11:49:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 addons disable registry --alsologtostderr -v=1
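
Note that the registry addon itself was healthy here (both readiness waits passed, in about 6s and 5s); the test failed only at the point of creating the throwaway registry-test pod, because every pod create must first clear Volcano's mutating webhook and volcano-admission-service has no ready endpoints (dial tcp 10.105.15.104:443: connect: connection refused). This is the classic fail-closed admission-webhook problem: one broken addon blocks unrelated pod creation cluster-wide. A quick client-go sketch for checking which webhooks in a cluster fail closed (illustrative only; kubeconfig handling is an assumption):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // List every mutating webhook and report its failure policy; a
        // webhook like mutatepod.volcano.sh with policy "Fail" rejects
        // pod creation whenever its backing service is unreachable.
        list, err := clientset.AdmissionregistrationV1().
            MutatingWebhookConfigurations().
            List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range list.Items {
            for _, w := range c.Webhooks {
                policy := "unset"
                if w.FailurePolicy != nil {
                    policy = string(*w.FailurePolicy)
                }
                fmt.Printf("%s/%s failurePolicy=%s\n", c.Name, w.Name, policy)
            }
        }
    }
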
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-348379
helpers_test.go:235: (dbg) docker inspect addons-348379:

-- stdout --
	[
	    {
	        "Id": "1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751",
	        "Created": "2024-09-23T11:36:14.086186806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2904413,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:36:14.226479889Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hostname",
	        "HostsPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hosts",
	        "LogPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751-json.log",
	        "Name": "/addons-348379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-348379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-348379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3-init/diff:/var/lib/docker/overlay2/e2b16ea68ee0680d6b3555ff1ad64b95e5f88f6159373a302ec4d54fa432d99a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-348379",
	                "Source": "/var/lib/docker/volumes/addons-348379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-348379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-348379",
	                "name.minikube.sigs.k8s.io": "addons-348379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0be556c33c7ef40a6e70f0f396b5b7933e0b3ad3ca535a0d80fe041626578e74",
	            "SandboxKey": "/var/run/docker/netns/0be556c33c7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-348379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d5b669f6f456e67aee85da3145f607fecadf2af36a162932dd5e9bc9ffffee31",
	                    "EndpointID": "406dde2b8517c79d36b643a1ab3c5c13c7554eb0ba705342617104596090b341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-348379",
	                        "1973e07b6a14"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-348379 -n addons-348379
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-348379 logs -n 25: (1.566571698s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-611017              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | -o=json --download-only              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-423730              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | download-docker-021793               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-021793            | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | binary-mirror-046209                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34157               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-046209              | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| addons  | enable dashboard -p                  | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| start   | -p addons-348379 --wait=true         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:42 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| ip      | addons-348379 ip                     | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:35:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:35:49.628900 2903914 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:35:49.629020 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629030 2903914 out.go:358] Setting ErrFile to fd 2...
	I0923 11:35:49.629036 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629290 2903914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 11:35:49.629765 2903914 out.go:352] Setting JSON to false
	I0923 11:35:49.630698 2903914 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155897,"bootTime":1726935453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:35:49.630769 2903914 start.go:139] virtualization:  
	I0923 11:35:49.632856 2903914 out.go:177] * [addons-348379] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 11:35:49.634664 2903914 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 11:35:49.634735 2903914 notify.go:220] Checking for updates...
	I0923 11:35:49.637703 2903914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:35:49.639443 2903914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:35:49.640961 2903914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 11:35:49.642654 2903914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 11:35:49.644154 2903914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:35:49.646005 2903914 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:35:49.674816 2903914 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:35:49.674959 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.740942 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.731429543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.741058 2903914 docker.go:318] overlay module found
	I0923 11:35:49.742922 2903914 out.go:177] * Using the docker driver based on user configuration
	I0923 11:35:49.744404 2903914 start.go:297] selected driver: docker
	I0923 11:35:49.744427 2903914 start.go:901] validating driver "docker" against <nil>
	I0923 11:35:49.744443 2903914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:35:49.745066 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.807015 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.798069613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.807243 2903914 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:35:49.807490 2903914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:35:49.810585 2903914 out.go:177] * Using Docker driver with root privileges
	I0923 11:35:49.812100 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:35:49.812180 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:35:49.812195 2903914 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 11:35:49.812295 2903914 start.go:340] cluster config:
	{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:35:49.814088 2903914 out.go:177] * Starting "addons-348379" primary control-plane node in "addons-348379" cluster
	I0923 11:35:49.815629 2903914 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 11:35:49.817311 2903914 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:35:49.818975 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:35:49.819031 2903914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:49.819044 2903914 cache.go:56] Caching tarball of preloaded images
	I0923 11:35:49.819072 2903914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:35:49.819129 2903914 preload.go:172] Found /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 11:35:49.819140 2903914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 11:35:49.819629 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:35:49.819663 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json: {Name:mk57bf6c9d1a024b95a9182333fb0e843fbdc049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:35:49.834226 2903914 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:49.834349 2903914 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:35:49.834370 2903914 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:35:49.834376 2903914 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:35:49.834383 2903914 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:35:49.834388 2903914 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 11:36:07.477009 2903914 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 11:36:07.477055 2903914 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:36:07.477102 2903914 start.go:360] acquireMachinesLock for addons-348379: {Name:mk0afc734c4276635047574670b52ff1624a597d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:36:07.477241 2903914 start.go:364] duration metric: took 114.625µs to acquireMachinesLock for "addons-348379"
	I0923 11:36:07.477273 2903914 start.go:93] Provisioning new machine with config: &{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:07.477361 2903914 start.go:125] createHost starting for "" (driver="docker")
	I0923 11:36:07.479499 2903914 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 11:36:07.479768 2903914 start.go:159] libmachine.API.Create for "addons-348379" (driver="docker")
	I0923 11:36:07.479806 2903914 client.go:168] LocalClient.Create starting
	I0923 11:36:07.479934 2903914 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem
	I0923 11:36:07.656758 2903914 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem
	I0923 11:36:07.895005 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 11:36:07.911037 2903914 cli_runner.go:211] docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 11:36:07.911143 2903914 network_create.go:284] running [docker network inspect addons-348379] to gather additional debugging logs...
	I0923 11:36:07.911165 2903914 cli_runner.go:164] Run: docker network inspect addons-348379
	W0923 11:36:07.926743 2903914 cli_runner.go:211] docker network inspect addons-348379 returned with exit code 1
	I0923 11:36:07.926792 2903914 network_create.go:287] error running [docker network inspect addons-348379]: docker network inspect addons-348379: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-348379 not found
	I0923 11:36:07.926806 2903914 network_create.go:289] output of [docker network inspect addons-348379]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-348379 not found
	
	** /stderr **
	I0923 11:36:07.926904 2903914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:07.941390 2903914 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3eb90}
	I0923 11:36:07.941437 2903914 network_create.go:124] attempt to create docker network addons-348379 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 11:36:07.941499 2903914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-348379 addons-348379
	I0923 11:36:08.013253 2903914 network_create.go:108] docker network addons-348379 192.168.49.0/24 created
	I0923 11:36:08.013292 2903914 kic.go:121] calculated static IP "192.168.49.2" for the "addons-348379" container
	I0923 11:36:08.013374 2903914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 11:36:08.030080 2903914 cli_runner.go:164] Run: docker volume create addons-348379 --label name.minikube.sigs.k8s.io=addons-348379 --label created_by.minikube.sigs.k8s.io=true
	I0923 11:36:08.048023 2903914 oci.go:103] Successfully created a docker volume addons-348379
	I0923 11:36:08.048128 2903914 cli_runner.go:164] Run: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 11:36:10.056258 2903914 cli_runner.go:217] Completed: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.008084592s)
	I0923 11:36:10.056295 2903914 oci.go:107] Successfully prepared a docker volume addons-348379
	I0923 11:36:10.056323 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:10.056345 2903914 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 11:36:10.056440 2903914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 11:36:14.019670 2903914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.963182399s)
	I0923 11:36:14.019706 2903914 kic.go:203] duration metric: took 3.963357873s to extract preloaded images to volume ...
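
The two docker run calls above mount the preload tarball read-only and untar it into the named volume. A standalone sketch of the same extraction, assuming PRELOAD_TARBALL points at a local preloaded-images-*.tar.lz4 (hypothetical variable; the volume and image names are taken from this log):

    # Extract an lz4-compressed preload tarball into a Docker volume
    PRELOAD_TARBALL=/path/to/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
    docker run --rm \
      -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
      -v addons-348379:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672 \
      -I lz4 -xf /preloaded.tar -C /extractDir
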
	W0923 11:36:14.019879 2903914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 11:36:14.020008 2903914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 11:36:14.071499 2903914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-348379 --name addons-348379 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-348379 --network addons-348379 --ip 192.168.49.2 --volume addons-348379:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 11:36:14.406562 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Running}}
	I0923 11:36:14.430276 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.456577 2903914 cli_runner.go:164] Run: docker exec addons-348379 stat /var/lib/dpkg/alternatives/iptables
	I0923 11:36:14.524343 2903914 oci.go:144] the created container "addons-348379" has a running status.
	I0923 11:36:14.524373 2903914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa...
	I0923 11:36:14.817157 2903914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 11:36:14.839902 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.872963 2903914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 11:36:14.872990 2903914 kic_runner.go:114] Args: [docker exec --privileged addons-348379 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 11:36:14.951457 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.976890 2903914 machine.go:93] provisionDockerMachine start ...
	I0923 11:36:14.977005 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.007610 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.007901 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.007913 2903914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:36:15.203560 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
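
The dialer above connects to 127.0.0.1:41792, the host port Docker assigned to the container's SSH port via --publish=127.0.0.1::22. A sketch for recovering that mapping:

    # Which host port did Docker bind to the container's SSH port?
    docker port addons-348379 22/tcp
    # e.g. 127.0.0.1:41792 (the port used by the libmachine SSH client above)
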
	
	I0923 11:36:15.203583 2903914 ubuntu.go:169] provisioning hostname "addons-348379"
	I0923 11:36:15.203659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.229660 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.229941 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.229961 2903914 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-348379 && echo "addons-348379" | sudo tee /etc/hostname
	I0923 11:36:15.387302 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.387387 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.409587 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.409829 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.409846 2903914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-348379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-348379/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-348379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:36:15.552128 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:36:15.552218 2903914 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
	I0923 11:36:15.552276 2903914 ubuntu.go:177] setting up certificates
	I0923 11:36:15.552305 2903914 provision.go:84] configureAuth start
	I0923 11:36:15.552432 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:15.571035 2903914 provision.go:143] copyHostCerts
	I0923 11:36:15.571118 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
	I0923 11:36:15.571374 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
	I0923 11:36:15.571463 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
	I0923 11:36:15.571520 2903914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.addons-348379 san=[127.0.0.1 192.168.49.2 addons-348379 localhost minikube]
	I0923 11:36:15.936111 2903914 provision.go:177] copyRemoteCerts
	I0923 11:36:15.936188 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:36:15.936230 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.954080 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.048521 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:36:16.073032 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:36:16.096585 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 11:36:16.120809 2903914 provision.go:87] duration metric: took 568.476502ms to configureAuth
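
configureAuth generated a server certificate whose SANs were listed a few lines up. A sketch for inspecting them on the node itself, assuming the cert landed at /etc/docker/server.pem as the scp lines show:

    # List the SANs baked into the provisioned server certificate
    openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'
    # expect addons-348379, localhost, minikube, 127.0.0.1 and 192.168.49.2
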
	I0923 11:36:16.120878 2903914 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:36:16.121066 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:16.121075 2903914 machine.go:96] duration metric: took 1.144158288s to provisionDockerMachine
	I0923 11:36:16.121082 2903914 client.go:171] duration metric: took 8.641266117s to LocalClient.Create
	I0923 11:36:16.121105 2903914 start.go:167] duration metric: took 8.641338888s to libmachine.API.Create "addons-348379"
	I0923 11:36:16.121117 2903914 start.go:293] postStartSetup for "addons-348379" (driver="docker")
	I0923 11:36:16.121127 2903914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:36:16.121180 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:36:16.121219 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.140164 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.237274 2903914 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:36:16.240688 2903914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:36:16.240726 2903914 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:36:16.240751 2903914 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:36:16.240759 2903914 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:36:16.240772 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
	I0923 11:36:16.240845 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
	I0923 11:36:16.240872 2903914 start.go:296] duration metric: took 119.748923ms for postStartSetup
	I0923 11:36:16.241197 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.257321 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:36:16.257608 2903914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:36:16.257659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.273475 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.364420 2903914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:36:16.369417 2903914 start.go:128] duration metric: took 8.892040374s to createHost
	I0923 11:36:16.369444 2903914 start.go:83] releasing machines lock for "addons-348379", held for 8.892189913s
	I0923 11:36:16.369525 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.386496 2903914 ssh_runner.go:195] Run: cat /version.json
	I0923 11:36:16.386558 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.386844 2903914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:36:16.386924 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.402767 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.407504 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.620856 2903914 ssh_runner.go:195] Run: systemctl --version
	I0923 11:36:16.625351 2903914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:36:16.629494 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 11:36:16.656196 2903914 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 11:36:16.656273 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:36:16.685634 2903914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
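
The two find invocations above patch the loopback CNI config in place (adding a "name" field and pinning cniVersion to 1.0.0) and park the bridge/podman configs under a .mk_disabled suffix. A sketch to verify both effects on the node:

    # The loopback config should now carry a name and cniVersion 1.0.0
    sudo cat /etc/cni/net.d/*loopback.conf*
    # The conflicting bridge/podman configs should be renamed out of the way
    ls /etc/cni/net.d/*.mk_disabled
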
	I0923 11:36:16.685658 2903914 start.go:495] detecting cgroup driver to use...
	I0923 11:36:16.685694 2903914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:36:16.685752 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 11:36:16.698438 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:36:16.709958 2903914 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:36:16.710048 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:36:16.723912 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:36:16.738695 2903914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:36:16.833978 2903914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:36:16.926860 2903914 docker.go:233] disabling docker service ...
	I0923 11:36:16.926964 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:36:16.947106 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:36:16.959548 2903914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:36:17.053558 2903914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:36:17.135034 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:36:17.146613 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:36:17.163902 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:36:17.174292 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:36:17.185041 2903914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:36:17.185112 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:36:17.195986 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.206180 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:36:17.217391 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.228075 2903914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:36:17.237476 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:36:17.247362 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:36:17.257646 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
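
The run of sed edits above rewrites /etc/containerd/config.toml: the cgroupfs driver instead of systemd cgroups, the pause:3.10 sandbox image, the runc v2 shim, and unprivileged ports enabled. A spot-check sketch:

    # Verify the settings the sed edits are expected to leave behind
    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' \
      /etc/containerd/config.toml
    # SystemdCgroup = false
    # sandbox_image = "registry.k8s.io/pause:3.10"
    # enable_unprivileged_ports = true
    # conf_dir = "/etc/cni/net.d"
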
	I0923 11:36:17.267821 2903914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:36:17.276866 2903914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:36:17.286512 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.359977 2903914 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:36:17.486930 2903914 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 11:36:17.487093 2903914 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
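
"Will wait 60s for socket path" is a poll for the containerd socket after the restart; roughly the same thing as a shell one-liner:

    # Wait up to 60s for containerd's socket to appear after the restart
    timeout 60 bash -c 'until [ -S /run/containerd/containerd.sock ]; do sleep 0.5; done' \
      && echo "containerd socket is up"
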
	I0923 11:36:17.490604 2903914 start.go:563] Will wait 60s for crictl version
	I0923 11:36:17.490709 2903914 ssh_runner.go:195] Run: which crictl
	I0923 11:36:17.494017 2903914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:36:17.529914 2903914 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 11:36:17.530001 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.553062 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.581066 2903914 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 11:36:17.583092 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:17.598970 2903914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 11:36:17.602709 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.616517 2903914 kubeadm.go:883] updating cluster {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:36:17.616637 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:17.616705 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.653561 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.653588 2903914 containerd.go:534] Images already preloaded, skipping extraction
	I0923 11:36:17.653654 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.689043 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.689069 2903914 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:36:17.689077 2903914 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0923 11:36:17.689170 2903914 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-348379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:36:17.689240 2903914 ssh_runner.go:195] Run: sudo crictl info
	I0923 11:36:17.725180 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:17.725207 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:17.725219 2903914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:36:17.725244 2903914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-348379 NodeName:addons-348379 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:36:17.725401 2903914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-348379"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:36:17.725481 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:36:17.734617 2903914 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:36:17.734696 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:36:17.743298 2903914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 11:36:17.761798 2903914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:36:17.779190 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
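
The kubeadm config shown above still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm warns about at the end of init below. A sketch of the migration command kubeadm itself suggests, pointed at the path used here (the output path is arbitrary):

    # Rewrite the deprecated v1beta3 config to the current kubeadm API version
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml.new \
        --new-config /tmp/kubeadm-migrated.yaml
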
	I0923 11:36:17.797282 2903914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:36:17.801431 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.813470 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.904297 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:17.918201 2903914 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379 for IP: 192.168.49.2
	I0923 11:36:17.918278 2903914 certs.go:194] generating shared ca certs ...
	I0923 11:36:17.918311 2903914 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:17.918478 2903914 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
	I0923 11:36:18.402482 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt ...
	I0923 11:36:18.402521 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt: {Name:mka24ad8ce2563bd38493ad3048e3b202e9928cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.403346 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key ...
	I0923 11:36:18.403367 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key: {Name:mke301cd867e18ebea9d875f8c02fb489d6a0a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.404035 2903914 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
	I0923 11:36:18.591619 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt ...
	I0923 11:36:18.591652 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt: {Name:mkf29e9cf8d545d0d33d0ce8b9548c24a316f1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.591849 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key ...
	I0923 11:36:18.591862 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key: {Name:mk38a5ba119b442a98d5a1991cd20b7dc11fb378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.592490 2903914 certs.go:256] generating profile certs ...
	I0923 11:36:18.592562 2903914 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key
	I0923 11:36:18.592583 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt with IP's: []
	I0923 11:36:18.906095 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt ...
	I0923 11:36:18.906134 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: {Name:mk33102a3556c59cf025437aacb3628bfa41ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906340 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key ...
	I0923 11:36:18.906354 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key: {Name:mk1894a7120f896161f07a459fec6eb4fe11e236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906997 2903914 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb
	I0923 11:36:18.907023 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 11:36:19.176533 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb ...
	I0923 11:36:19.176571 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb: {Name:mkf3902710f18b86666bddc46eb9d246a2fd9230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177433 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb ...
	I0923 11:36:19.177458 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb: {Name:mkcc729dc3699800a37a33c607924c19bb2a2d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177602 2903914 certs.go:381] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt
	I0923 11:36:19.177687 2903914 certs.go:385] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key
	I0923 11:36:19.177743 2903914 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key
	I0923 11:36:19.177760 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt with IP's: []
	I0923 11:36:19.407564 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt ...
	I0923 11:36:19.407592 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt: {Name:mk60cabd91332996a9c3d4f42fab2e735667c2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408332 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key ...
	I0923 11:36:19.408353 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key: {Name:mk59bc62a8c0559971fc4c2dcb7a472d97d949c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408565 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:36:19.408610 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:36:19.408642 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:36:19.408675 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
	I0923 11:36:19.409266 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:36:19.438812 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:36:19.463399 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:36:19.487765 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:36:19.512554 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:36:19.537552 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:36:19.562531 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:36:19.587389 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:36:19.612515 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:36:19.638755 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:36:19.657011 2903914 ssh_runner.go:195] Run: openssl version
	I0923 11:36:19.662616 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:36:19.672363 2903914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675882 2903914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675950 2903914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.682948 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
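
The b5213941.0 symlink name follows OpenSSL's subject-hash convention: CA lookups in /etc/ssl/certs resolve by the hash of the certificate's subject. A sketch of how that name is derived:

    # Derive the subject-hash symlink name for the minikube CA
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                      # b5213941 for this CA, per the test -L above
    ls -l "/etc/ssl/certs/$h.0"    # -> symlink to minikubeCA.pem
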
	I0923 11:36:19.692316 2903914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:36:19.695950 2903914 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:36:19.695999 2903914 kubeadm.go:392] StartCluster: {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:36:19.696080 2903914 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 11:36:19.696143 2903914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:36:19.737759 2903914 cri.go:89] found id: ""
	I0923 11:36:19.737855 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:36:19.746943 2903914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:36:19.756075 2903914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 11:36:19.756179 2903914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:36:19.767199 2903914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:36:19.767221 2903914 kubeadm.go:157] found existing configuration files:
	
	I0923 11:36:19.767362 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:36:19.776658 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:36:19.776748 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:36:19.785348 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:36:19.794057 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:36:19.794157 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:36:19.803152 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.812476 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:36:19.812574 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.821847 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:36:19.830557 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:36:19.830648 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:36:19.839343 2903914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 11:36:19.882483 2903914 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:36:19.882775 2903914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:36:19.901384 2903914 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 11:36:19.901507 2903914 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 11:36:19.901565 2903914 kubeadm.go:310] OS: Linux
	I0923 11:36:19.901637 2903914 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 11:36:19.901714 2903914 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 11:36:19.901776 2903914 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 11:36:19.901867 2903914 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 11:36:19.901970 2903914 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 11:36:19.902052 2903914 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 11:36:19.902117 2903914 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 11:36:19.902212 2903914 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 11:36:19.902294 2903914 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 11:36:19.967219 2903914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:36:19.967380 2903914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:36:19.967473 2903914 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
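
Before a full init, the same preflight checks can be exercised in isolation. A sketch reusing the generated config and a subset of the skip list from the kubeadm init command above:

    # Dry-run only kubeadm's preflight phase with the generated config
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase preflight \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification
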
	I0923 11:36:19.973209 2903914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:36:19.975734 2903914 out.go:235]   - Generating certificates and keys ...
	I0923 11:36:19.975831 2903914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:36:19.975904 2903914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:36:20.179743 2903914 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:36:21.047813 2903914 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:36:21.298950 2903914 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:36:22.135071 2903914 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:36:22.608576 2903914 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:36:22.609029 2903914 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:22.891026 2903914 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:36:22.891409 2903914 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:23.205606 2903914 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:36:23.489426 2903914 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:36:23.714238 2903914 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:36:23.714637 2903914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:36:23.917484 2903914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:36:24.438330 2903914 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:36:24.759712 2903914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:36:25.033943 2903914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:36:25.695483 2903914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:36:25.696139 2903914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:36:25.699152 2903914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:36:25.701227 2903914 out.go:235]   - Booting up control plane ...
	I0923 11:36:25.701329 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:36:25.702980 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:36:25.704104 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:36:25.714874 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:36:25.721041 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:36:25.721278 2903914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:36:25.822291 2903914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:36:25.822414 2903914 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:36:26.325513 2903914 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.540451ms
	I0923 11:36:26.325611 2903914 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:36:32.327784 2903914 kubeadm.go:310] [api-check] The API server is healthy after 6.002254671s
	I0923 11:36:32.349741 2903914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:36:32.363473 2903914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:36:32.390246 2903914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:36:32.390456 2903914 kubeadm.go:310] [mark-control-plane] Marking the node addons-348379 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:36:32.404726 2903914 kubeadm.go:310] [bootstrap-token] Using token: 9jvvlf.nkkd2cu2r67rq0id
	I0923 11:36:32.407710 2903914 out.go:235]   - Configuring RBAC rules ...
	I0923 11:36:32.407927 2903914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:36:32.415924 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:36:32.424675 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:36:32.430654 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:36:32.435150 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:36:32.439236 2903914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:36:32.750475 2903914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:36:33.169716 2903914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:36:33.734975 2903914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:36:33.736330 2903914 kubeadm.go:310] 
	I0923 11:36:33.736408 2903914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:36:33.736419 2903914 kubeadm.go:310] 
	I0923 11:36:33.736495 2903914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:36:33.736505 2903914 kubeadm.go:310] 
	I0923 11:36:33.736531 2903914 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:36:33.736593 2903914 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:36:33.736647 2903914 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:36:33.736656 2903914 kubeadm.go:310] 
	I0923 11:36:33.736710 2903914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:36:33.736719 2903914 kubeadm.go:310] 
	I0923 11:36:33.736766 2903914 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:36:33.736774 2903914 kubeadm.go:310] 
	I0923 11:36:33.736827 2903914 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:36:33.736907 2903914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:36:33.736980 2903914 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:36:33.736989 2903914 kubeadm.go:310] 
	I0923 11:36:33.737074 2903914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:36:33.737166 2903914 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:36:33.737175 2903914 kubeadm.go:310] 
	I0923 11:36:33.737258 2903914 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737363 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 \
	I0923 11:36:33.737389 2903914 kubeadm.go:310] 	--control-plane 
	I0923 11:36:33.737397 2903914 kubeadm.go:310] 
	I0923 11:36:33.737482 2903914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:36:33.737497 2903914 kubeadm.go:310] 
	I0923 11:36:33.737577 2903914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737677 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 
	I0923 11:36:33.741014 2903914 kubeadm.go:310] W0923 11:36:19.879102    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741352 2903914 kubeadm.go:310] W0923 11:36:19.880019    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741583 2903914 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 11:36:33.741697 2903914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
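	The two deprecation warnings above name their own remedy; the suggested migration is a one-liner (old.yaml and new.yaml are illustrative placeholders, exactly as in the warning text):

	    kubeadm config migrate --old-config old.yaml --new-config new.yaml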
	I0923 11:36:33.741739 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:33.741754 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:33.744827 2903914 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 11:36:33.747515 2903914 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 11:36:33.751591 2903914 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 11:36:33.751612 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 11:36:33.770489 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
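	With the kindnet manifest applied, the rollout can be spot-checked out of band; a minimal sketch, assuming kindnet's usual kube-system DaemonSet named kindnet carrying the app=kindnet label:

	    kubectl -n kube-system get pods -l app=kindnet
	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s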
	I0923 11:36:34.059421 2903914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:36:34.059507 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.059553 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-348379 minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=addons-348379 minikube.k8s.io/primary=true
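	The long label command above stamps minikube's bookkeeping (version, commit, updated_at, primary flag) onto the node; the result is visible with:

	    kubectl get node addons-348379 --show-labels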
	I0923 11:36:34.223427 2903914 ops.go:34] apiserver oom_adj: -16
	I0923 11:36:34.223613 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.724103 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.223668 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.724171 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.224392 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.724281 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.223628 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.724422 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.224601 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.335390 2903914 kubeadm.go:1113] duration metric: took 4.275959417s to wait for elevateKubeSystemPrivileges
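	The burst of identical `get sa default` calls above is a readiness poll: the apiserver comes up before the controller-manager has minted the default ServiceAccount, so minikube retries on a half-second cadence until it exists. A minimal equivalent loop, assuming a kubeconfig already pointing at the new cluster:

	    until kubectl get serviceaccount default >/dev/null 2>&1; do
	      sleep 0.5
	    done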
	I0923 11:36:38.335426 2903914 kubeadm.go:394] duration metric: took 18.639429725s to StartCluster
	I0923 11:36:38.335446 2903914 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336106 2903914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:36:38.336533 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336743 2903914 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:38.336888 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:36:38.337142 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.337173 2903914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:36:38.337257 2903914 addons.go:69] Setting yakd=true in profile "addons-348379"
	I0923 11:36:38.337273 2903914 addons.go:234] Setting addon yakd=true in "addons-348379"
	I0923 11:36:38.337299 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.337814 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338086 2903914 addons.go:69] Setting inspektor-gadget=true in profile "addons-348379"
	I0923 11:36:38.338109 2903914 addons.go:234] Setting addon inspektor-gadget=true in "addons-348379"
	I0923 11:36:38.338133 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338208 2903914 addons.go:69] Setting metrics-server=true in profile "addons-348379"
	I0923 11:36:38.338232 2903914 addons.go:234] Setting addon metrics-server=true in "addons-348379"
	I0923 11:36:38.338262 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338580 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338757 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.339010 2903914 addons.go:69] Setting cloud-spanner=true in profile "addons-348379"
	I0923 11:36:38.339029 2903914 addons.go:234] Setting addon cloud-spanner=true in "addons-348379"
	I0923 11:36:38.339055 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.339531 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.343469 2903914 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-348379"
	I0923 11:36:38.343503 2903914 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-348379"
	I0923 11:36:38.345984 2903914 out.go:177] * Verifying Kubernetes components...
	I0923 11:36:38.346027 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.346500 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.348537 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:38.345645 2903914 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-348379"
	I0923 11:36:38.365526 2903914 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:38.365567 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.366036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345659 2903914 addons.go:69] Setting default-storageclass=true in profile "addons-348379"
	I0923 11:36:38.373074 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-348379"
	I0923 11:36:38.373503 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345667 2903914 addons.go:69] Setting gcp-auth=true in profile "addons-348379"
	I0923 11:36:38.384131 2903914 mustload.go:65] Loading cluster: addons-348379
	I0923 11:36:38.384331 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.384582 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345675 2903914 addons.go:69] Setting ingress=true in profile "addons-348379"
	I0923 11:36:38.394024 2903914 addons.go:234] Setting addon ingress=true in "addons-348379"
	I0923 11:36:38.394117 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.345679 2903914 addons.go:69] Setting ingress-dns=true in profile "addons-348379"
	I0923 11:36:38.401504 2903914 addons.go:234] Setting addon ingress-dns=true in "addons-348379"
	I0923 11:36:38.404456 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.405051 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345945 2903914 addons.go:69] Setting registry=true in profile "addons-348379"
	I0923 11:36:38.410134 2903914 addons.go:234] Setting addon registry=true in "addons-348379"
	I0923 11:36:38.410210 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.411036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345953 2903914 addons.go:69] Setting storage-provisioner=true in profile "addons-348379"
	I0923 11:36:38.345956 2903914 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-348379"
	I0923 11:36:38.345960 2903914 addons.go:69] Setting volcano=true in profile "addons-348379"
	I0923 11:36:38.345964 2903914 addons.go:69] Setting volumesnapshots=true in profile "addons-348379"
	I0923 11:36:38.411256 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.454084 2903914 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:36:38.460897 2903914 addons.go:234] Setting addon storage-provisioner=true in "addons-348379"
	I0923 11:36:38.460953 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.461441 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.478144 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:36:38.478222 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:36:38.478322 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.487269 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-348379"
	I0923 11:36:38.487639 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.517904 2903914 addons.go:234] Setting addon volcano=true in "addons-348379"
	I0923 11:36:38.517966 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.518467 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.546347 2903914 addons.go:234] Setting addon volumesnapshots=true in "addons-348379"
	I0923 11:36:38.546411 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.546987 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.575673 2903914 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:36:38.575914 2903914 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:36:38.579125 2903914 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:38.579150 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:36:38.579221 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.579592 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:36:38.579639 2903914 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:36:38.579689 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.617860 2903914 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:36:38.622150 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:36:38.622177 2903914 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:36:38.622251 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.622735 2903914 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:36:38.624431 2903914 addons.go:234] Setting addon default-storageclass=true in "addons-348379"
	I0923 11:36:38.624466 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.628278 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.632340 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.636671 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:36:38.637544 2903914 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:38.637561 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:36:38.637622 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.655509 2903914 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:36:38.662366 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:36:38.665082 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:36:38.665107 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:36:38.665177 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.670015 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:36:38.676295 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:38.676385 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:36:38.676464 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.683562 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:36:38.686396 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:38.686419 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:36:38.686489 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.712304 2903914 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-348379"
	I0923 11:36:38.712346 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.712766 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.727410 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:36:38.730115 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:36:38.735471 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 11:36:38.735719 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.757102 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.760000 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:36:38.766393 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:36:38.766549 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:36:38.769513 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:36:38.771099 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.774083 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:36:38.774459 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:38.774507 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:36:38.774615 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.780134 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 11:36:38.788013 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:36:38.790672 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:36:38.794853 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:36:38.794879 2903914 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:36:38.794943 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.799096 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:36:38.799171 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:36:38.799267 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.815113 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 11:36:38.821525 2903914 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:38.821606 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 11:36:38.822703 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.827569 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850382 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850811 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.867021 2903914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:38.867042 2903914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:36:38.867113 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.883391 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.898738 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.922806 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.939666 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.947337 2903914 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:36:38.950063 2903914 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:36:38.952600 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:38.952625 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:36:38.952695 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.965723 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.981816 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.987132 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.991897 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	W0923 11:36:39.007637 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007696 2903914 retry.go:31] will retry after 239.86918ms: ssh: handshake failed: EOF
	W0923 11:36:39.007745 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007761 2903914 retry.go:31] will retry after 150.66552ms: ssh: handshake failed: EOF
	I0923 11:36:39.023482 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.023945 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.169972 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:39.170259 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 11:36:39.494971 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:39.556492 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:39.608428 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:39.623138 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:36:39.623206 2903914 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:36:39.709728 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:36:39.709758 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:36:39.777118 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:39.827433 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:36:39.827460 2903914 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:36:39.863119 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:39.870165 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:36:39.870234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:36:39.883044 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:39.890345 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:39.896655 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:36:39.896731 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:36:39.967387 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:36:39.967475 2903914 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:36:40.018137 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:36:40.018231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:36:40.030206 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:36:40.030287 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:36:40.050639 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:40.058613 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.058708 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:36:40.150893 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:36:40.151005 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:36:40.242616 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:36:40.242710 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:36:40.287048 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:36:40.287084 2903914 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:36:40.486475 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:36:40.486504 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:36:40.493831 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.494920 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:36:40.494943 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:36:40.557986 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.558018 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:36:40.619083 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:40.619108 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:36:40.657629 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:36:40.657660 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:36:40.734595 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:36:40.734624 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:36:40.801552 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:36:40.801595 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:36:40.865890 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.930189 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:36:40.930231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:36:40.951956 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:36:40.951998 2903914 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:36:40.961846 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:36:40.961876 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:36:40.972100 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:41.216224 2903914 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.216250 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:36:41.260577 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:36:41.260621 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:36:41.278445 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.294750 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:36:41.294791 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:36:41.323456 2903914 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.153148456s)
	I0923 11:36:41.323525 2903914 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
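	The 2.15s sed pipeline completed above rewrites the CoreDNS Corefile in place. Reconstructed from the sed expressions, the injected block resolves host.minikube.internal to the host gateway before falling through to the upstream forwarder:

	    # inspect the patched Corefile
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # expected fragment (reconstructed from the sed script):
	    #    hosts {
	    #       192.168.49.1 host.minikube.internal
	    #       fallthrough
	    #    }
	    #    forward . /etc/resolv.conf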
	I0923 11:36:41.323500 2903914 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.153503843s)
	I0923 11:36:41.324480 2903914 node_ready.go:35] waiting up to 6m0s for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330122 2903914 node_ready.go:49] node "addons-348379" has status "Ready":"True"
	I0923 11:36:41.330152 2903914 node_ready.go:38] duration metric: took 5.643428ms for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330163 2903914 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:36:41.339728 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:41.543987 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:36:41.544058 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:36:41.648713 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.648741 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:36:41.746379 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:36:41.746407 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:36:41.828842 2903914 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-348379" context rescaled to 1 replicas
	I0923 11:36:41.902914 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.921919 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.42690557s)
	I0923 11:36:41.921979 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.365461937s)
	I0923 11:36:41.922006 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.313555468s)
	I0923 11:36:42.029240 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:36:42.029272 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:36:42.347806 2903914 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347883 2903914 pod_ready.go:82] duration metric: took 1.008114432s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	E0923 11:36:42.347920 2903914 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347968 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
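	The "not found" error just above is benign: the coredns deployment was rescaled to 1 replica moments earlier (the kapi.go rescale line above), so the first pod the waiter latched onto was deleted and it moves on to the survivor. The surviving replica is listable by CoreDNS's standard selector:

	    kubectl -n kube-system get pods -l k8s-app=kube-dns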
	I0923 11:36:42.549396 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:36:42.549431 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:36:42.830524 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:42.830560 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:36:43.211480 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:44.381316 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:44.884644 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.107485626s)
	I0923 11:36:45.843349 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:36:45.843447 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:45.872465 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.470866 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:36:46.702197 2903914 addons.go:234] Setting addon gcp-auth=true in "addons-348379"
	I0923 11:36:46.702270 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:46.702828 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:46.733217 2903914 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:36:46.733280 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:46.775337 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.854788 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:47.501824 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.638626321s)
	I0923 11:36:47.501900 2903914 addons.go:475] Verifying addon ingress=true in "addons-348379"
	I0923 11:36:47.504030 2903914 out.go:177] * Verifying ingress addon...
	I0923 11:36:47.506684 2903914 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:36:47.511450 2903914 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:36:47.511576 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
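	These kapi waits poll pod phase by label selector until every match leaves Pending; the same view from outside the harness, using the selector printed above:

	    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx -w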
	I0923 11:36:48.013380 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.544278 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.902915 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:49.026130 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.515743 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.786183 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.903059538s)
	I0923 11:36:49.786253 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.89583004s)
	I0923 11:36:49.786324 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.735598298s)
	I0923 11:36:49.786361 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.292505813s)
	I0923 11:36:49.786376 2903914 addons.go:475] Verifying addon registry=true in "addons-348379"
	I0923 11:36:49.786562 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.920643746s)
	I0923 11:36:49.786579 2903914 addons.go:475] Verifying addon metrics-server=true in "addons-348379"
	I0923 11:36:49.786620 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.814490533s)
	I0923 11:36:49.786930 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.508442064s)
	W0923 11:36:49.786964 2903914 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:36:49.786984 2903914 retry.go:31] will retry after 231.122068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
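	The failure recorded above is the usual CRD-before-CR ordering race: all six manifests went through one kubectl apply, and the VolumeSnapshotClass object could not be mapped because its CRD was created in the same batch and not yet established. The retry (and the later --force re-apply) recovers; a race-free sketch applies the CRDs first and waits for them to register, using the file names from the log:

	    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for=condition=established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f csi-hostpath-snapshotclass.yaml \
	                  -f rbac-volume-snapshot-controller.yaml \
	                  -f volume-snapshot-controller-deployment.yaml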
	I0923 11:36:49.787069 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.884124931s)
	I0923 11:36:49.788510 2903914 out.go:177] * Verifying registry addon...
	I0923 11:36:49.789825 2903914 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-348379 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:36:49.793761 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:36:49.857501 2903914 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:36:49.857529 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.019003 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:50.091953 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.88041396s)
	I0923 11:36:50.091996 2903914 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:50.092188 2903914 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.35893014s)
	I0923 11:36:50.094620 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:50.094707 2903914 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:36:50.096428 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.099435 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:36:50.101290 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:36:50.102965 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:36:50.103035 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:36:50.192143 2903914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:36:50.192217 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.253203 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:36:50.253267 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:36:50.330971 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.356163 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.356234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:36:50.430758 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.512255 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.604722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.797707 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.023077 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.104450 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.297470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.356586 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:51.511733 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.617197 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.799992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.872269 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853212545s)
	I0923 11:36:51.872449 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441621639s)
	I0923 11:36:51.875653 2903914 addons.go:475] Verifying addon gcp-auth=true in "addons-348379"
	I0923 11:36:51.880701 2903914 out.go:177] * Verifying gcp-auth addon...
	I0923 11:36:51.886238 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:36:51.899994 2903914 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:36:52.012221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.113517 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.299559 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.512212 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.605423 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.799422 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.854916 2903914 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.854943 2903914 pod_ready.go:82] duration metric: took 10.506940522s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.854956 2903914 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861027 2903914 pod_ready.go:93] pod "etcd-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.861059 2903914 pod_ready.go:82] duration metric: took 6.063045ms for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861112 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867900 2903914 pod_ready.go:93] pod "kube-apiserver-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.867934 2903914 pod_ready.go:82] duration metric: took 6.806328ms for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867947 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875428 2903914 pod_ready.go:93] pod "kube-controller-manager-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.875465 2903914 pod_ready.go:82] duration metric: took 7.477644ms for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875477 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881487 2903914 pod_ready.go:93] pod "kube-proxy-nqbmm" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.881527 2903914 pod_ready.go:82] duration metric: took 6.024203ms for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881558 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.013082 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.115359 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.261367 2903914 pod_ready.go:93] pod "kube-scheduler-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:53.261440 2903914 pod_ready.go:82] duration metric: took 379.865643ms for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.261468 2903914 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.298690 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:53.511926 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.605400 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.806273 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.013143 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.105281 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.297943 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.511688 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.604819 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.801146 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.012597 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.104977 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.267783 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:55.297710 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.513110 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.604704 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.797620 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.013461 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.105679 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.298722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.511968 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.607325 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.804470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.011592 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.104763 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.298808 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.511851 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.612444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.768189 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:57.797413 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.012279 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.104746 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.298054 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.511885 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.604675 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.801111 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.014365 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.115071 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.299589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.511922 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.604297 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.768227 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:59.798277 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.038178 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.105635 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.333590 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.512357 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.604689 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.799851 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.011394 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.105032 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.310842 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.511596 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.605046 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.768750 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:01.798428 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.013963 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.113737 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.297934 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.511024 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.605393 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.798053 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.012529 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.104923 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.298010 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.512557 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.604282 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.768828 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:03.798455 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.015940 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.104743 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.299059 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.512309 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.606024 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.798339 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.019138 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.104829 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.298375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.511973 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.604665 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.767100 2903914 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:05.767130 2903914 pod_ready.go:82] duration metric: took 12.505640545s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:05.767142 2903914 pod_ready.go:39] duration metric: took 24.436967089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:05.767156 2903914 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:37:05.767223 2903914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:05.781985 2903914 api_server.go:72] duration metric: took 27.445202984s to wait for apiserver process to appear ...
	I0923 11:37:05.782061 2903914 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:37:05.782092 2903914 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 11:37:05.789746 2903914 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 11:37:05.790754 2903914 api_server.go:141] control plane version: v1.31.1
	I0923 11:37:05.790781 2903914 api_server.go:131] duration metric: took 8.705461ms to wait for apiserver health ...
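
The healthz probe is a plain HTTPS GET that succeeds once the endpoint answers 200 with body "ok", as logged above. A self-contained sketch; it skips TLS verification to stay short, whereas the real client authenticates with the cluster CA and client certificates:

    // Poll https://192.168.49.2:8443/healthz until it returns 200 "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Assumption for brevity; do not do this outside a sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }
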
	I0923 11:37:05.790793 2903914 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:37:05.800773 2903914 system_pods.go:59] 18 kube-system pods found
	I0923 11:37:05.800813 2903914 system_pods.go:61] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.800824 2903914 system_pods.go:61] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.800829 2903914 system_pods.go:61] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.800839 2903914 system_pods.go:61] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.800844 2903914 system_pods.go:61] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.800848 2903914 system_pods.go:61] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.800852 2903914 system_pods.go:61] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.800856 2903914 system_pods.go:61] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.800860 2903914 system_pods.go:61] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.800866 2903914 system_pods.go:61] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.800870 2903914 system_pods.go:61] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.800875 2903914 system_pods.go:61] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.800884 2903914 system_pods.go:61] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.800892 2903914 system_pods.go:61] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.800908 2903914 system_pods.go:61] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.800916 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800924 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800931 2903914 system_pods.go:61] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.800938 2903914 system_pods.go:74] duration metric: took 10.139277ms to wait for pod list to return data ...
	I0923 11:37:05.800948 2903914 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:37:05.802797 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.803949 2903914 default_sa.go:45] found service account: "default"
	I0923 11:37:05.803977 2903914 default_sa.go:55] duration metric: took 3.018472ms for default service account to be created ...
	I0923 11:37:05.803986 2903914 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:37:05.813725 2903914 system_pods.go:86] 18 kube-system pods found
	I0923 11:37:05.813761 2903914 system_pods.go:89] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.813771 2903914 system_pods.go:89] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.813777 2903914 system_pods.go:89] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.813785 2903914 system_pods.go:89] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.813789 2903914 system_pods.go:89] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.813793 2903914 system_pods.go:89] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.813798 2903914 system_pods.go:89] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.813810 2903914 system_pods.go:89] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.813815 2903914 system_pods.go:89] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.813824 2903914 system_pods.go:89] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.813828 2903914 system_pods.go:89] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.813835 2903914 system_pods.go:89] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.813845 2903914 system_pods.go:89] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.813851 2903914 system_pods.go:89] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.813859 2903914 system_pods.go:89] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.813866 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813877 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813881 2903914 system_pods.go:89] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.813889 2903914 system_pods.go:126] duration metric: took 9.896579ms to wait for k8s-apps to be running ...
	I0923 11:37:05.813904 2903914 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:37:05.813964 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:37:05.826195 2903914 system_svc.go:56] duration metric: took 12.281298ms WaitForService to wait for kubelet
	I0923 11:37:05.826224 2903914 kubeadm.go:582] duration metric: took 27.489446441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
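
The kubelet check behind system_svc.go is a systemd is-active probe (the log runs it over ssh with sudo). A local sketch:

    // Ask systemd whether kubelet is active; exit status 0 means yes.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
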
	I0923 11:37:05.826246 2903914 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:37:05.829405 2903914 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 11:37:05.829438 2903914 node_conditions.go:123] node cpu capacity is 2
	I0923 11:37:05.829451 2903914 node_conditions.go:105] duration metric: took 3.199247ms to run NodePressure ...
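
The NodePressure step reads these capacity figures (2 CPUs, 203034800Ki of ephemeral storage) from the node object; a one-shot jsonpath query pulls the same fields:

    // Read node CPU and ephemeral-storage capacity, the two values
    // node_conditions.go logs above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "get", "nodes", "-o",
            "jsonpath={.items[0].status.capacity.cpu} {.items[0].status.capacity.ephemeral-storage}").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Printf("capacity (cpu, ephemeral-storage): %s\n", out)
    }
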
	I0923 11:37:05.829481 2903914 start.go:241] waiting for startup goroutines ...
	I0923 11:37:06.016364 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.105089 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.297935 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:06.512579 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.605014 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.797841 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.011801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.104294 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.298088 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.511664 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.604893 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.801940 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.013764 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.105345 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.299766 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.512208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.605989 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.797920 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.013457 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.113285 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.298479 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.519639 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.621187 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.798092 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.023070 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.120870 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.299336 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.511998 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.604502 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.798656 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.011605 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.104357 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.297872 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.510891 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.605182 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.798112 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.016742 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.106392 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.302890 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.514134 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.606249 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.802344 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.012829 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.106630 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.297952 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.520400 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.609375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.799034 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.014344 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.118058 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.298788 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.510806 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.631742 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.797720 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.019826 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.106226 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.298830 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.511804 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.605411 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.798246 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.012028 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.104868 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.297740 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.524665 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.625850 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.797603 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.011338 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.103818 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.298307 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.512577 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.605193 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.798142 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.013543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.105035 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.297589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.512084 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.605341 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.798244 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.012543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.113664 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.298444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.511599 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.606804 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.798859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.017663 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.106755 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.297564 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.512160 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.604958 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.797812 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.013732 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.105616 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.298921 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.510795 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.604820 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.797800 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.012174 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.114441 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.298237 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.513158 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.625275 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.799161 2903914 kapi.go:107] duration metric: took 33.005400732s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 11:37:23.013096 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.105126 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:23.511910 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.613223 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.014075 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.105028 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.511000 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.605248 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.014495 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.111460 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.512685 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.612274 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.013492 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.105721 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.511127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.605859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.012240 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.105210 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.511957 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.604964 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.012182 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.104813 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.511895 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.605303 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.013053 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.104426 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.511346 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.603992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.014157 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.118062 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.511690 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.604969 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.014750 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.105518 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.511111 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.605002 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.012362 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.104960 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.511542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.604045 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.013753 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.104350 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.512293 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.614483 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:34.016127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:34.104734 2903914 kapi.go:107] duration metric: took 44.005304527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 11:37:34.510755 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.019903 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.510951 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.014603 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.510966 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.016710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.511532 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.013339 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.511853 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.012335 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.511569 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.013791 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.511129 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.011102 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.510793 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.015629 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.512029 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.011711 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.510801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.011721 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.511165 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.039643 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.511176 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.011883 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.511761 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.011001 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.511048 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.012925 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.511632 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.010792 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.511167 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.018047 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.511473 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.013208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.511710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.011604 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.512230 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.011680 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.511429 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.012725 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.511977 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.013759 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.512155 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.012542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.512221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:57.011684 2903914 kapi.go:107] duration metric: took 1m9.505004156s to wait for app.kubernetes.io/name=ingress-nginx ...
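
Note how the registry, csi-hostpath-driver, and ingress-nginx waiters interleaved above: they run concurrently, and each logs its own duration as it finishes (33s, 44s, 1m9.5s). A sketch of that fan-out using errgroup, with a condensed variant of the waitForLabel helper sketched earlier (the selectors are from the log; the namespaces here are assumptions):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"

        "golang.org/x/sync/errgroup"
    )

    // Condensed variant of the polling helper sketched earlier.
    func waitForLabel(ctx context.Context, ns, selector string) error {
        for {
            select {
            case <-ctx.Done():
                return fmt.Errorf("waiting for %s pods: %w", selector, ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
            out, err := exec.CommandContext(ctx, "kubectl", "get", "pods",
                "-n", ns, "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err != nil || len(out) == 0 {
                continue
            }
            ready := true
            for _, p := range strings.Fields(string(out)) {
                if p != "Running" {
                    ready = false
                }
            }
            if ready {
                return nil
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        waits := []struct{ ns, selector string }{
            {"kube-system", "kubernetes.io/minikube-addons=registry"},
            {"kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"},
            {"ingress-nginx", "app.kubernetes.io/name=ingress-nginx"},
        }
        g, gctx := errgroup.WithContext(ctx)
        for _, w := range waits {
            w := w // capture for pre-Go 1.22 loop semantics
            g.Go(func() error {
                start := time.Now()
                if err := waitForLabel(gctx, w.ns, w.selector); err != nil {
                    return err
                }
                fmt.Printf("took %s to wait for %s\n", time.Since(start), w.selector)
                return nil
            })
        }
        if err := g.Wait(); err != nil {
            fmt.Println("addon verification failed:", err)
        }
    }
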
	I0923 11:42:51.886491 2903914 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 11:42:51.886551 2903914 kapi.go:107] duration metric: took 6m0.000318633s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 11:42:51.886644 2903914 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
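
This failure mode is worth spelling out: the client's rate limiter Wait blocks on the request context, so once the 6-minute deadline set at 11:36:51 expires, the next pod list returns context deadline exceeded and the waiter gives up, which is the long quiet gap between 11:37:57 and 11:42:51 above. A minimal reproduction of that surface behaviour, with golang.org/x/time/rate standing in for client-go's request limiter:

    // Show how a limiter's Wait surfaces "context deadline exceeded"
    // once the deadline passes, as the kapi.go waiter reports above.
    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Deadline short enough to expire while we are still polling.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        limiter := rate.NewLimiter(rate.Every(500*time.Millisecond), 1)
        for {
            if err := limiter.Wait(ctx); err != nil {
                fmt.Println("client rate limiter Wait returned an error:", err)
                return
            }
            fmt.Println("polling...") // stands in for the pod list call
        }
    }
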
	I0923 11:42:51.888644 2903914 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress
	I0923 11:42:51.890515 2903914 addons.go:510] duration metric: took 6m13.553319939s for enable addons: enabled=[ingress-dns nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress]
	I0923 11:42:51.890575 2903914 start.go:246] waiting for cluster config update ...
	I0923 11:42:51.890600 2903914 start.go:255] writing updated cluster config ...
	I0923 11:42:51.890918 2903914 ssh_runner.go:195] Run: rm -f paused
	I0923 11:42:52.246444 2903914 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:42:52.248672 2903914 out.go:177] * Done! kubectl is now configured to use "addons-348379" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	b6876ad1ce80c       4f725bf50aaa5       About a minute ago   Exited              gadget                                   7                   1161771f95540       gadget-xl5qc
	052af9fc2b4c1       289a818c8d9c5       12 minutes ago       Running             controller                               0                   47eb6c64c7a7d       ingress-nginx-controller-bc57996ff-6wq45
	967992dcbaf35       ee6d597e62dc8       12 minutes ago       Running             csi-snapshotter                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c27f769c6bce0       642ded511e141       12 minutes ago       Running             csi-provisioner                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	0faa259c6b420       922312104da8a       12 minutes ago       Running             liveness-probe                           0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	994b2f0a53275       08f6b2990811a       12 minutes ago       Running             hostpath                                 0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	ccaa6826447e2       9a80d518f102c       12 minutes ago       Running             csi-attacher                             0                   3c219582bf398       csi-hostpath-attacher-0
	63bc3e7f0cc00       420193b27261a       12 minutes ago       Exited              patch                                    0                   6fd7a092c3487       ingress-nginx-admission-patch-bfw56
	c5db857da31e6       4d1e5c3e97420       12 minutes ago       Running             volume-snapshot-controller               0                   311300b7a364c       snapshot-controller-56fcc65765-d47ng
	d1ac50a6261dd       420193b27261a       12 minutes ago       Exited              create                                   0                   895d61d9c16ad       ingress-nginx-admission-create-fwt6v
	ac4d50ae15f93       4d1e5c3e97420       12 minutes ago       Running             volume-snapshot-controller               0                   698a149c7f280       snapshot-controller-56fcc65765-dchr7
	ed38cab12122b       0107d56dbc0be       12 minutes ago       Running             node-driver-registrar                    0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	6781c22ae4a4e       5548a49bb60ba       12 minutes ago       Running             metrics-server                           0                   ffa74d42d59a0       metrics-server-84c5f94fbc-dgpbq
	5e5a8e69b34fd       7ce2150c8929b       12 minutes ago       Running             local-path-provisioner                   0                   9bc0d2214e2a7       local-path-provisioner-86d989889c-h5pl9
	f32c490729dc3       be9cac3585579       12 minutes ago       Running             cloud-spanner-emulator                   0                   b7452ccbf189d       cloud-spanner-emulator-5b584cc74-lbht7
	7bd548d7da390       a9bac31a5be8d       12 minutes ago       Running             nvidia-device-plugin-ctr                 0                   5582827316900       nvidia-device-plugin-daemonset-xqqn9
	cecddb85ce0f0       487fa743e1e22       12 minutes ago       Running             csi-resizer                              0                   85d2097f2da83       csi-hostpath-resizer-0
	a6cc88c765de7       1461903ec4fe9       12 minutes ago       Running             csi-external-health-monitor-controller   0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c37000ef28652       35508c2f890c4       13 minutes ago       Running             minikube-ingress-dns                     0                   d881770814e2f       kube-ingress-dns-minikube
	497f8c41b274e       2f6c962e7b831       13 minutes ago       Running             coredns                                  0                   d2d48d489a636       coredns-7c65d6cfc9-ppz9h
	b05bd4b18e280       ba04bb24b9575       13 minutes ago       Running             storage-provisioner                      0                   e816c2d6b5461       storage-provisioner
	a7e55167b7b39       6a23fa8fd2b78       13 minutes ago       Running             kindnet-cni                              0                   9389ee1ac67f5       kindnet-4kcdh
	005547c4c4723       24a140c548c07       13 minutes ago       Running             kube-proxy                               0                   a4defdda67173       kube-proxy-nqbmm
	9255b7a6f4a59       7f8aa378bb47d       13 minutes ago       Running             kube-scheduler                           0                   cae78edec7e41       kube-scheduler-addons-348379
	92413a7a8d6f6       279f381cb3736       13 minutes ago       Running             kube-controller-manager                  0                   731c7b79dc027       kube-controller-manager-addons-348379
	20e5f68e09619       d3f53a98c0a9d       13 minutes ago       Running             kube-apiserver                           0                   bdcb66b095a48       kube-apiserver-addons-348379
	8abc59946512a       27e3830e14027       13 minutes ago       Running             etcd                                     0                   a8c7d184a28f5       etcd-addons-348379
	
	
	==> containerd <==
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.823016652Z" level=info msg="StopContainer for \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\" returns successfully"
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.825622442Z" level=info msg="StopPodSandbox for \"f3640fdf0d54f720045ad0f2f77c7a967961da17442b47a18d27f7e13bb0d92e\""
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.826240638Z" level=info msg="Container to stop \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.832977241Z" level=info msg="shim disconnected" id=ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.833228251Z" level=warning msg="cleaning up after shim disconnected" id=ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.833329001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.872459145Z" level=info msg="StopContainer for \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\" returns successfully"
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.873224122Z" level=info msg="StopPodSandbox for \"dd955d0d91d1c6136207233a5986b50df8b6886bafca742ae7f6816d64c9f826\""
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.873376326Z" level=info msg="Container to stop \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.890840644Z" level=info msg="shim disconnected" id=f3640fdf0d54f720045ad0f2f77c7a967961da17442b47a18d27f7e13bb0d92e namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.893137356Z" level=warning msg="cleaning up after shim disconnected" id=f3640fdf0d54f720045ad0f2f77c7a967961da17442b47a18d27f7e13bb0d92e namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.893352715Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.959643307Z" level=info msg="shim disconnected" id=dd955d0d91d1c6136207233a5986b50df8b6886bafca742ae7f6816d64c9f826 namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.959717612Z" level=warning msg="cleaning up after shim disconnected" id=dd955d0d91d1c6136207233a5986b50df8b6886bafca742ae7f6816d64c9f826 namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.959727885Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.965945318Z" level=info msg="TearDown network for sandbox \"f3640fdf0d54f720045ad0f2f77c7a967961da17442b47a18d27f7e13bb0d92e\" successfully"
	Sep 23 11:49:55 addons-348379 containerd[816]: time="2024-09-23T11:49:55.965987328Z" level=info msg="StopPodSandbox for \"f3640fdf0d54f720045ad0f2f77c7a967961da17442b47a18d27f7e13bb0d92e\" returns successfully"
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.129007131Z" level=info msg="TearDown network for sandbox \"dd955d0d91d1c6136207233a5986b50df8b6886bafca742ae7f6816d64c9f826\" successfully"
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.129170257Z" level=info msg="StopPodSandbox for \"dd955d0d91d1c6136207233a5986b50df8b6886bafca742ae7f6816d64c9f826\" returns successfully"
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.683794361Z" level=info msg="RemoveContainer for \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\""
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.708554335Z" level=info msg="RemoveContainer for \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\" returns successfully"
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.733376512Z" level=error msg="ContainerStatus for \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\": not found"
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.778796696Z" level=info msg="RemoveContainer for \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\""
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.787829059Z" level=info msg="RemoveContainer for \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\" returns successfully"
	Sep 23 11:49:56 addons-348379 containerd[816]: time="2024-09-23T11:49:56.790136413Z" level=error msg="ContainerStatus for \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\": not found"
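	
	The ContainerStatus "not found" errors at the end of this block are a benign race: the kubelet asks for the status of a container it has just removed (the same pattern appears in the kubelet log below). A minimal sketch for inspecting what the runtime still holds, assuming the profile name from this report:
	
	  minikube -p addons-348379 ssh
	  sudo crictl ps -a    # all containers, including exited ones
	  sudo crictl pods     # pod sandboxes known to containerd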
	
	
	==> coredns [497f8c41b274e14c72d9933f13fac36b6c4acf8def62c9f3205823623e58226d] <==
	[INFO] 10.244.0.8:44197 - 54267 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180151s
	[INFO] 10.244.0.8:41305 - 6638 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001704017s
	[INFO] 10.244.0.8:41305 - 35565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141999s
	[INFO] 10.244.0.8:40738 - 27223 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080918s
	[INFO] 10.244.0.8:40738 - 10324 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129353s
	[INFO] 10.244.0.8:56286 - 60697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080098s
	[INFO] 10.244.0.8:56286 - 24093 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043076s
	[INFO] 10.244.0.8:43089 - 14336 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004978s
	[INFO] 10.244.0.8:43089 - 63494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035692s
	[INFO] 10.244.0.8:57209 - 23569 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041272s
	[INFO] 10.244.0.8:57209 - 18191 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041304s
	[INFO] 10.244.0.8:58538 - 8344 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004349043s
	[INFO] 10.244.0.8:58538 - 22686 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003876717s
	[INFO] 10.244.0.8:48601 - 29559 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050166s
	[INFO] 10.244.0.8:48601 - 628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054556s
	[INFO] 10.244.0.8:43467 - 15089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093161s
	[INFO] 10.244.0.8:43467 - 32245 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042002s
	[INFO] 10.244.0.8:54486 - 7532 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066798s
	[INFO] 10.244.0.8:54486 - 50287 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038416s
	[INFO] 10.244.0.8:33247 - 25963 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052963s
	[INFO] 10.244.0.8:33247 - 9581 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040123s
	[INFO] 10.244.0.8:42814 - 61299 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001881616s
	[INFO] 10.244.0.8:42814 - 6802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001453827s
	[INFO] 10.244.0.8:59224 - 13301 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046384s
	[INFO] 10.244.0.8:59224 - 30455 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035372s
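	
	Each pair of queries above is one step of the resolver walking the pod's search path (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the EC2 domain) until the fully qualified name answers; the NXDOMAINs are expected with the default ndots:5. A sketch to reproduce the expansion, assuming a throwaway busybox pod:
	
	  kubectl --context addons-348379 run dnstest --rm -it --restart=Never \
	    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local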
	
	
	==> describe nodes <==
	Name:               addons-348379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-348379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=addons-348379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-348379
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-348379"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:36:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-348379
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:49:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:48:17 +0000   Mon, 23 Sep 2024 11:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-348379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb92b9be866447b6af2ef85f12013e1b
	  System UUID:                84447a6a-e17a-42ff-ba46-fb82e93bc172
	  Boot ID:                    d8899273-2c3a-49f7-8c9a-66d2209373ba
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-lbht7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gadget                      gadget-xl5qc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6wq45    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-ppz9h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-zdwf8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-348379                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-4kcdh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-348379                250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-348379       200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-nqbmm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-348379                100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-dgpbq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         13m
	  kube-system                 nvidia-device-plugin-daemonset-xqqn9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-d47ng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-dchr7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-h5pl9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-admission-7f54bd7598-s85bg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-admission-init-f2bhm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-controllers-5ff7c5d4db-w658s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  volcano-system              volcano-scheduler-79dc4b78bb-2vx88          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node addons-348379 event: Registered Node addons-348379 in Controller
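	
	The four pending volcano-system pods count toward the pod total but request no CPU or memory, so the 1050m (52%) CPU figure comes almost entirely from the control plane plus the ingress, coredns, kindnet, and metrics-server addons. The same summary can be re-derived with:
	
	  kubectl --context addons-348379 describe node addons-348379 | grep -A 10 'Allocated resources'
	  kubectl --context addons-348379 get pods -A --field-selector spec.nodeName=addons-348379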
	
	
	==> dmesg <==
	
	
	==> etcd [8abc59946512ab5ab4d902de194545051c7e577bd5196212bf85326ca705cd43] <==
	{"level":"info","ts":"2024-09-23T11:36:27.038394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-23T11:36:27.038527Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-23T11:36:27.823331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.825628Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-348379 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:36:27.825777Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827294Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827491Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827732Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.831921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.856413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.847318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:36:27.847973Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.857537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.856653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:28.587946Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1709}
	{"level":"info","ts":"2024-09-23T11:46:28.663638Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1709,"took":"75.159252ms","hash":39109330,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4308992,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-23T11:46:28.663685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":39109330,"revision":1709,"compact-revision":-1}
	
	
	==> kernel <==
	 11:49:57 up 1 day, 19:32,  0 users,  load average: 0.74, 0.58, 1.48
	Linux addons-348379 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a7e55167b7b39430011e4998f44d30eaec7edc15414dc232724161a09a27e599] <==
	I0923 11:47:50.321081       1 main.go:299] handling current node
	I0923 11:48:00.327253       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:00.327544       1 main.go:299] handling current node
	I0923 11:48:10.312132       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:10.312166       1 main.go:299] handling current node
	I0923 11:48:20.312412       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:20.312446       1 main.go:299] handling current node
	I0923 11:48:30.312298       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:30.312427       1 main.go:299] handling current node
	I0923 11:48:40.312125       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:40.312158       1 main.go:299] handling current node
	I0923 11:48:50.320905       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:48:50.320946       1 main.go:299] handling current node
	I0923 11:49:00.316075       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:49:00.316130       1 main.go:299] handling current node
	I0923 11:49:10.312126       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:49:10.312160       1 main.go:299] handling current node
	I0923 11:49:20.315442       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:49:20.315480       1 main.go:299] handling current node
	I0923 11:49:30.312715       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:49:30.312749       1 main.go:299] handling current node
	I0923 11:49:40.313075       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:49:40.313118       1 main.go:299] handling current node
	I0923 11:49:50.312143       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:49:50.312181       1 main.go:299] handling current node
	
	
	==> kube-apiserver [20e5f68e09619b9d622e831b0c429aae0e245dfb5c647d9e6fd9193c6cdfedac] <==
	E0923 11:47:54.783991       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:47:54.785615       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:47:54.908274       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:47:54.908316       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:47:54.909972       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:48:54.794230       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:48:54.794269       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:48:54.796097       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:48:54.916950       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:48:54.916993       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:48:54.918651       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:48:55.661371       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate-sa.k8s.io: failed calling webhook "gcp-auth-mutate-sa.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:48:55.661553       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate-sa.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate/sa?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:49:41.010292       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:49:41.010337       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:49:41.012626       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:49:54.805055       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:49:54.805107       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:49:54.806976       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:49:54.837264       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:49:54.837308       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:49:54.839080       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:49:54.924563       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:49:54.924604       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:49:54.926223       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
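	
	Two different failure policies are visible here: gcp-auth-mutate.k8s.io "fails open" (the request proceeds and only an error is logged), while mutatepod.volcano.sh "fails closed" and rejects every pod CREATE for as long as volcano-admission-service has no ready endpoints. Both can be confirmed with:
	
	  kubectl --context addons-348379 get mutatingwebhookconfigurations \
	    -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'
	  kubectl --context addons-348379 -n volcano-system get endpoints volcano-admission-service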
	
	
	==> kube-controller-manager [92413a7a8d6f6d058e961a0759535062b02d7d50e5012e81614e6290ce3465b3] <==
	E0923 11:46:54.902541       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:46:54.903797       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:47:47.182800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="7.665714ms"
	E0923 11:47:47.182839       1 replica_set.go:560] "Unhandled Error" err="sync \"gcp-auth/gcp-auth-89d5ffd79\" failed with Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.786207       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.787446       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.910691       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:47:54.912612       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:48:17.574828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-348379"
	I0923 11:48:20.132889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="65.493µs"
	I0923 11:48:20.146254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="49.452µs"
	I0923 11:48:33.131858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="68.085µs"
	I0923 11:48:35.131468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="44.004µs"
	I0923 11:48:38.129745       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 11:48:52.131254       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	E0923 11:48:54.796669       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:48:54.798498       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:48:54.919410       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:48:54.920772       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:49:49.184601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="5.974µs"
	E0923 11:49:54.807652       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:49:54.809074       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:49:54.926853       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:49:54.928113       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:49:55.602789       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.268µs"
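	
	The repeating job_controller errors are the downstream effect of that failed-closed webhook: every attempt to create the gcp-auth and volcano-admission-init pods is rejected before scheduling. A quick check on whether the admission backend ever came up (the app label is assumed from volcano's standard manifests, not from this report):
	
	  kubectl --context addons-348379 -n volcano-system get pods -l app=volcano-admission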
	
	
	==> kube-proxy [005547c4c4723cd6ce4dac939ebab2b2d89e428b99ec971a179497842dcb5abe] <==
	I0923 11:36:39.711887       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:36:39.808570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:36:39.808647       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:36:39.869255       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:36:39.869320       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:36:39.872284       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:36:39.872851       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:36:39.872865       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:36:39.885041       1 config.go:199] "Starting service config controller"
	I0923 11:36:39.885075       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:36:39.885100       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:36:39.885105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:36:39.886870       1 config.go:328] "Starting node config controller"
	I0923 11:36:39.886882       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:36:39.986150       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:36:39.986235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:36:39.987256       1 shared_informer.go:320] Caches are synced for node config
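	
	The "configuration may be incomplete" warning only means nodePortAddresses is unset, so NodePorts accept connections on every local IP; this is harmless on a single-node cluster. The effective setting lives in the kubeadm-managed ConfigMap:
	
	  kubectl --context addons-348379 -n kube-system get configmap kube-proxy -o yaml | grep -i nodePortAddresses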
	
	
	==> kube-scheduler [9255b7a6f4a590493be04a5bbbdf14b0efddfe1f321b8a25d2eed1055c6741df] <==
	W0923 11:36:31.587742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:36:31.587843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:36:31.587943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:36:31.588052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:36:31.588221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.588380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:36:31.588540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:36:31.588655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:36:31.588819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:36:31.588921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:36:31.589185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 11:36:32.574074       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
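	
	The burst of "forbidden" errors is the usual startup race: the scheduler's informers begin listing before the apiserver has finished installing its bootstrap RBAC, and the final "Caches are synced" line shows it recovered. The permissions can be spot-checked after startup with:
	
	  kubectl --context addons-348379 auth can-i list pods --as=system:kube-scheduler --all-namespaces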
	
	
	==> kubelet <==
	Sep 23 11:49:49 addons-348379 kubelet[1462]: I0923 11:49:49.464686    1462 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933d0719-bb66-4c27-9808-eb14855569f8-kube-api-access-t7vqg" (OuterVolumeSpecName: "kube-api-access-t7vqg") pod "933d0719-bb66-4c27-9808-eb14855569f8" (UID: "933d0719-bb66-4c27-9808-eb14855569f8"). InnerVolumeSpecName "kube-api-access-t7vqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 11:49:49 addons-348379 kubelet[1462]: I0923 11:49:49.563367    1462 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t7vqg\" (UniqueName: \"kubernetes.io/projected/933d0719-bb66-4c27-9808-eb14855569f8-kube-api-access-t7vqg\") on node \"addons-348379\" DevicePath \"\""
	Sep 23 11:49:49 addons-348379 kubelet[1462]: I0923 11:49:49.626040    1462 scope.go:117] "RemoveContainer" containerID="429fb2006ad1686c446764bedc5e6271dd48ec046251db7ebe55e0a61bd7f19f"
	Sep 23 11:49:49 addons-348379 kubelet[1462]: I0923 11:49:49.637309    1462 scope.go:117] "RemoveContainer" containerID="429fb2006ad1686c446764bedc5e6271dd48ec046251db7ebe55e0a61bd7f19f"
	Sep 23 11:49:49 addons-348379 kubelet[1462]: E0923 11:49:49.638104    1462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"429fb2006ad1686c446764bedc5e6271dd48ec046251db7ebe55e0a61bd7f19f\": not found" containerID="429fb2006ad1686c446764bedc5e6271dd48ec046251db7ebe55e0a61bd7f19f"
	Sep 23 11:49:49 addons-348379 kubelet[1462]: I0923 11:49:49.638147    1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"429fb2006ad1686c446764bedc5e6271dd48ec046251db7ebe55e0a61bd7f19f"} err="failed to get container status \"429fb2006ad1686c446764bedc5e6271dd48ec046251db7ebe55e0a61bd7f19f\": rpc error: code = NotFound desc = an error occurred when try to find container \"429fb2006ad1686c446764bedc5e6271dd48ec046251db7ebe55e0a61bd7f19f\": not found"
	Sep 23 11:49:51 addons-348379 kubelet[1462]: I0923 11:49:51.120499    1462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933d0719-bb66-4c27-9808-eb14855569f8" path="/var/lib/kubelet/pods/933d0719-bb66-4c27-9808-eb14855569f8/volumes"
	Sep 23 11:49:52 addons-348379 kubelet[1462]: E0923 11:49:52.117652    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:49:55 addons-348379 kubelet[1462]: E0923 11:49:55.117344    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.030128    1462 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwxq7\" (UniqueName: \"kubernetes.io/projected/e86ab41a-1d3c-4fd0-8e39-126f3b789212-kube-api-access-bwxq7\") pod \"e86ab41a-1d3c-4fd0-8e39-126f3b789212\" (UID: \"e86ab41a-1d3c-4fd0-8e39-126f3b789212\") "
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.033314    1462 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86ab41a-1d3c-4fd0-8e39-126f3b789212-kube-api-access-bwxq7" (OuterVolumeSpecName: "kube-api-access-bwxq7") pod "e86ab41a-1d3c-4fd0-8e39-126f3b789212" (UID: "e86ab41a-1d3c-4fd0-8e39-126f3b789212"). InnerVolumeSpecName "kube-api-access-bwxq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.131371    1462 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bwxq7\" (UniqueName: \"kubernetes.io/projected/e86ab41a-1d3c-4fd0-8e39-126f3b789212-kube-api-access-bwxq7\") on node \"addons-348379\" DevicePath \"\""
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.232053    1462 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlxm2\" (UniqueName: \"kubernetes.io/projected/172e9514-645d-4b65-8403-0862836b34c7-kube-api-access-zlxm2\") pod \"172e9514-645d-4b65-8403-0862836b34c7\" (UID: \"172e9514-645d-4b65-8403-0862836b34c7\") "
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.234171    1462 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172e9514-645d-4b65-8403-0862836b34c7-kube-api-access-zlxm2" (OuterVolumeSpecName: "kube-api-access-zlxm2") pod "172e9514-645d-4b65-8403-0862836b34c7" (UID: "172e9514-645d-4b65-8403-0862836b34c7"). InnerVolumeSpecName "kube-api-access-zlxm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.332961    1462 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zlxm2\" (UniqueName: \"kubernetes.io/projected/172e9514-645d-4b65-8403-0862836b34c7-kube-api-access-zlxm2\") on node \"addons-348379\" DevicePath \"\""
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.657964    1462 scope.go:117] "RemoveContainer" containerID="ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.720150    1462 scope.go:117] "RemoveContainer" containerID="ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: E0923 11:49:56.771575    1462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\": not found" containerID="ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.771624    1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e"} err="failed to get container status \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba68702d656c252a35e65d48b371eb02721ef03ffe5d59847fe680de901bad2e\": not found"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.771650    1462 scope.go:117] "RemoveContainer" containerID="f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.788217    1462 scope.go:117] "RemoveContainer" containerID="f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: E0923 11:49:56.790351    1462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\": not found" containerID="f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a"
	Sep 23 11:49:56 addons-348379 kubelet[1462]: I0923 11:49:56.790386    1462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a"} err="failed to get container status \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f35fa36f73991213c5f1e74b64b03af50539129c8585baf76ef1006139578b0a\": not found"
	Sep 23 11:49:57 addons-348379 kubelet[1462]: I0923 11:49:57.120032    1462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172e9514-645d-4b65-8403-0862836b34c7" path="/var/lib/kubelet/pods/172e9514-645d-4b65-8403-0862836b34c7/volumes"
	Sep 23 11:49:57 addons-348379 kubelet[1462]: I0923 11:49:57.120438    1462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86ab41a-1d3c-4fd0-8e39-126f3b789212" path="/var/lib/kubelet/pods/e86ab41a-1d3c-4fd0-8e39-126f3b789212/volumes"
	
	
	==> storage-provisioner [b05bd4b18e2804c1706af6011e03e349f643f23d7f968ca74ffb0f2eaf78047d] <==
	I0923 11:36:43.550112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:36:43.591903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:36:43.592066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:36:43.604224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:36:43.604420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	I0923 11:36:43.605459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdf468f-763c-4a54-98e3-d90ea0e2e8e5", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-348379_e926b365-1f0f-4822-899e-75d077991921 became leader
	I0923 11:36:43.707026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	

-- /stdout --
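The kubelet entries above point at the likely root cause shared by the Volcano failures: the image references carry a doubled registry prefix (docker.io/docker.io/volcanosh/...), which names a repository that does not exist, so the scheduler and webhook-manager pods never leave ImagePullBackOff. A minimal sketch of how one might confirm this by hand on the node, assuming crictl is available inside the minikube container (these commands are illustrative and were not part of the run):

    # open a shell on the node (hypothetical session)
    $ minikube -p addons-348379 ssh
    # the doubled-prefix reference from the kubelet log should fail to resolve
    $ sudo crictl pull docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
    # while the canonical single-prefix name should pull cleanly
    $ sudo crictl pull docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882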
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
helpers_test.go:261: (dbg) Run:  kubectl --context addons-348379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1 (93.633915ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fwt6v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bfw56" not found
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-s85bg" not found
	Error from server (NotFound): pods "volcano-admission-init-f2bhm" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-w658s" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-2vx88" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1
--- FAIL: TestAddons/parallel/Registry (14.85s)
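Note that every pod in the describe above came back NotFound even though the field-selector listing had just reported them: the describe was issued without a namespace flag, so it searched only default, while these pods live in ingress-nginx and volcano-system. A sketch of the namespaced form that would have reached them (illustrative, not part of the run):

    $ kubectl --context addons-348379 -n volcano-system describe pod volcano-scheduler-79dc4b78bb-2vx88
    $ kubectl --context addons-348379 -n ingress-nginx describe pod ingress-nginx-admission-create-fwt6v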

x
+
TestAddons/parallel/Ingress (483.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-348379 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-348379 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-348379 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:243: (dbg) Non-zero exit: kubectl --context addons-348379 replace --force -f testdata/nginx-pod-svc.yaml: exit status 1 (306.781665ms)

-- stdout --
	service/nginx replaced

-- /stdout --
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused

** /stderr **
addons_test.go:245: failed to kubectl replace nginx-pod-svc. args "kubectl --context addons-348379 replace --force -f testdata/nginx-pod-svc.yaml". exit status 1
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:248: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
addons_test.go:248: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-09-23 12:04:27.258640839 +0000 UTC m=+1733.768688186
addons_test.go:249: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
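The replace failure above is downstream of the same Volcano breakage: the addon registered the mutatepod.volcano.sh webhook, but the volcano-admission pod backing volcano-admission-service never became ready (its image pull fails too), so the API server's call to the webhook is refused and the nginx pod can never be created. A sketch of how one might confirm the dangling webhook registration (illustrative commands, not part of the run):

    $ kubectl --context addons-348379 get mutatingwebhookconfigurations
    $ kubectl --context addons-348379 -n volcano-system get endpoints volcano-admission-service

With no ready endpoints behind the service, a webhook whose failurePolicy is Fail blocks every pod create it matches until the service recovers or the configuration is removed.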
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-348379
helpers_test.go:235: (dbg) docker inspect addons-348379:

-- stdout --
	[
	    {
	        "Id": "1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751",
	        "Created": "2024-09-23T11:36:14.086186806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2904413,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:36:14.226479889Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hostname",
	        "HostsPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hosts",
	        "LogPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751-json.log",
	        "Name": "/addons-348379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-348379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-348379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3-init/diff:/var/lib/docker/overlay2/e2b16ea68ee0680d6b3555ff1ad64b95e5f88f6159373a302ec4d54fa432d99a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-348379",
	                "Source": "/var/lib/docker/volumes/addons-348379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-348379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-348379",
	                "name.minikube.sigs.k8s.io": "addons-348379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0be556c33c7ef40a6e70f0f396b5b7933e0b3ad3ca535a0d80fe041626578e74",
	            "SandboxKey": "/var/run/docker/netns/0be556c33c7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-348379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d5b669f6f456e67aee85da3145f607fecadf2af36a162932dd5e9bc9ffffee31",
	                    "EndpointID": "406dde2b8517c79d36b643a1ab3c5c13c7554eb0ba705342617104596090b341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-348379",
	                        "1973e07b6a14"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
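The inspect output confirms the cluster container itself is healthy: State.Status is "running" and the expected ports (22, 2376, 5000, 8443, 32443) are all published on loopback. The nested-index Go template that minikube itself uses later in this log for the SSH port works just as well by hand; for example, recovering the mapped API server port (matching the 41795 shown above):

    $ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-348379
    41795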
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-348379 -n addons-348379
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-348379 logs -n 25: (1.452496337s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-611017              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | -o=json --download-only              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-423730              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | download-docker-021793               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-021793            | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | binary-mirror-046209                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34157               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-046209              | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| addons  | enable dashboard -p                  | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| start   | -p addons-348379 --wait=true         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:42 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| ip      | addons-348379 ip                     | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | -p addons-348379                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | -p addons-348379                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-348379 addons                 | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	|         | addons-348379                        |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:35:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:35:49.628900 2903914 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:35:49.629020 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629030 2903914 out.go:358] Setting ErrFile to fd 2...
	I0923 11:35:49.629036 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629290 2903914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 11:35:49.629765 2903914 out.go:352] Setting JSON to false
	I0923 11:35:49.630698 2903914 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155897,"bootTime":1726935453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:35:49.630769 2903914 start.go:139] virtualization:  
	I0923 11:35:49.632856 2903914 out.go:177] * [addons-348379] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 11:35:49.634664 2903914 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 11:35:49.634735 2903914 notify.go:220] Checking for updates...
	I0923 11:35:49.637703 2903914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:35:49.639443 2903914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:35:49.640961 2903914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 11:35:49.642654 2903914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 11:35:49.644154 2903914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:35:49.646005 2903914 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:35:49.674816 2903914 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:35:49.674959 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.740942 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.731429543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.741058 2903914 docker.go:318] overlay module found
	I0923 11:35:49.742922 2903914 out.go:177] * Using the docker driver based on user configuration
	I0923 11:35:49.744404 2903914 start.go:297] selected driver: docker
	I0923 11:35:49.744427 2903914 start.go:901] validating driver "docker" against <nil>
	I0923 11:35:49.744443 2903914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:35:49.745066 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.807015 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.798069613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.807243 2903914 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:35:49.807490 2903914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:35:49.810585 2903914 out.go:177] * Using Docker driver with root privileges
	I0923 11:35:49.812100 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:35:49.812180 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:35:49.812195 2903914 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 11:35:49.812295 2903914 start.go:340] cluster config:
	{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:35:49.814088 2903914 out.go:177] * Starting "addons-348379" primary control-plane node in "addons-348379" cluster
	I0923 11:35:49.815629 2903914 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 11:35:49.817311 2903914 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:35:49.818975 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:35:49.819031 2903914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:49.819044 2903914 cache.go:56] Caching tarball of preloaded images
	I0923 11:35:49.819072 2903914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:35:49.819129 2903914 preload.go:172] Found /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 11:35:49.819140 2903914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 11:35:49.819629 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:35:49.819663 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json: {Name:mk57bf6c9d1a024b95a9182333fb0e843fbdc049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:35:49.834226 2903914 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:49.834349 2903914 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:35:49.834370 2903914 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:35:49.834376 2903914 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:35:49.834383 2903914 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:35:49.834388 2903914 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 11:36:07.477009 2903914 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 11:36:07.477055 2903914 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:36:07.477102 2903914 start.go:360] acquireMachinesLock for addons-348379: {Name:mk0afc734c4276635047574670b52ff1624a597d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:36:07.477241 2903914 start.go:364] duration metric: took 114.625µs to acquireMachinesLock for "addons-348379"
	I0923 11:36:07.477273 2903914 start.go:93] Provisioning new machine with config: &{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:07.477361 2903914 start.go:125] createHost starting for "" (driver="docker")
	I0923 11:36:07.479499 2903914 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 11:36:07.479768 2903914 start.go:159] libmachine.API.Create for "addons-348379" (driver="docker")
	I0923 11:36:07.479806 2903914 client.go:168] LocalClient.Create starting
	I0923 11:36:07.479934 2903914 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem
	I0923 11:36:07.656758 2903914 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem
	I0923 11:36:07.895005 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 11:36:07.911037 2903914 cli_runner.go:211] docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 11:36:07.911143 2903914 network_create.go:284] running [docker network inspect addons-348379] to gather additional debugging logs...
	I0923 11:36:07.911165 2903914 cli_runner.go:164] Run: docker network inspect addons-348379
	W0923 11:36:07.926743 2903914 cli_runner.go:211] docker network inspect addons-348379 returned with exit code 1
	I0923 11:36:07.926792 2903914 network_create.go:287] error running [docker network inspect addons-348379]: docker network inspect addons-348379: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-348379 not found
	I0923 11:36:07.926806 2903914 network_create.go:289] output of [docker network inspect addons-348379]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-348379 not found
	
	** /stderr **
	I0923 11:36:07.926904 2903914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:07.941390 2903914 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3eb90}
	I0923 11:36:07.941437 2903914 network_create.go:124] attempt to create docker network addons-348379 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 11:36:07.941499 2903914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-348379 addons-348379
	I0923 11:36:08.013253 2903914 network_create.go:108] docker network addons-348379 192.168.49.0/24 created
	I0923 11:36:08.013292 2903914 kic.go:121] calculated static IP "192.168.49.2" for the "addons-348379" container
	I0923 11:36:08.013374 2903914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 11:36:08.030080 2903914 cli_runner.go:164] Run: docker volume create addons-348379 --label name.minikube.sigs.k8s.io=addons-348379 --label created_by.minikube.sigs.k8s.io=true
	I0923 11:36:08.048023 2903914 oci.go:103] Successfully created a docker volume addons-348379
	I0923 11:36:08.048128 2903914 cli_runner.go:164] Run: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 11:36:10.056258 2903914 cli_runner.go:217] Completed: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.008084592s)
	I0923 11:36:10.056295 2903914 oci.go:107] Successfully prepared a docker volume addons-348379
	I0923 11:36:10.056323 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:10.056345 2903914 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 11:36:10.056440 2903914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 11:36:14.019670 2903914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.963182399s)
	I0923 11:36:14.019706 2903914 kic.go:203] duration metric: took 3.963357873s to extract preloaded images to volume ...
	W0923 11:36:14.019879 2903914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 11:36:14.020008 2903914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 11:36:14.071499 2903914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-348379 --name addons-348379 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-348379 --network addons-348379 --ip 192.168.49.2 --volume addons-348379:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 11:36:14.406562 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Running}}
	I0923 11:36:14.430276 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.456577 2903914 cli_runner.go:164] Run: docker exec addons-348379 stat /var/lib/dpkg/alternatives/iptables
	I0923 11:36:14.524343 2903914 oci.go:144] the created container "addons-348379" has a running status.
	I0923 11:36:14.524373 2903914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa...
	I0923 11:36:14.817157 2903914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 11:36:14.839902 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.872963 2903914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 11:36:14.872990 2903914 kic_runner.go:114] Args: [docker exec --privileged addons-348379 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 11:36:14.951457 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.976890 2903914 machine.go:93] provisionDockerMachine start ...
	I0923 11:36:14.977005 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.007610 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.007901 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.007913 2903914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:36:15.203560 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.203583 2903914 ubuntu.go:169] provisioning hostname "addons-348379"
	I0923 11:36:15.203659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.229660 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.229941 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.229961 2903914 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-348379 && echo "addons-348379" | sudo tee /etc/hostname
	I0923 11:36:15.387302 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.387387 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.409587 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.409829 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.409846 2903914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-348379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-348379/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-348379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:36:15.552128 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
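The hosts script above is idempotent: it only touches /etc/hosts when no entry for the new hostname exists yet, rewriting the Debian-convention 127.0.1.1 line if one is present and appending it otherwise, so repeated provisioning runs leave exactly one entry. The result can be verified with:

	getent hosts addons-348379   # should resolve to 127.0.1.1 after provisioning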
	I0923 11:36:15.552218 2903914 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
	I0923 11:36:15.552276 2903914 ubuntu.go:177] setting up certificates
	I0923 11:36:15.552305 2903914 provision.go:84] configureAuth start
	I0923 11:36:15.552432 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:15.571035 2903914 provision.go:143] copyHostCerts
	I0923 11:36:15.571118 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
	I0923 11:36:15.571374 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
	I0923 11:36:15.571463 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
	I0923 11:36:15.571520 2903914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.addons-348379 san=[127.0.0.1 192.168.49.2 addons-348379 localhost minikube]
	I0923 11:36:15.936111 2903914 provision.go:177] copyRemoteCerts
	I0923 11:36:15.936188 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:36:15.936230 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.954080 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.048521 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:36:16.073032 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:36:16.096585 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 11:36:16.120809 2903914 provision.go:87] duration metric: took 568.476502ms to configureAuth
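configureAuth produced a CA-signed server certificate whose SANs cover every name a client might dial: 127.0.0.1, 192.168.49.2, the node name, localhost, and minikube. An approximate openssl equivalent, assuming the CA files shown above (a sketch, not minikube's actual implementation, which generates certs in Go):

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.addons-348379" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-348379,DNS:localhost,DNS:minikube') \
	  -days 365 -out server.pem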
	I0923 11:36:16.120878 2903914 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:36:16.121066 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:16.121075 2903914 machine.go:96] duration metric: took 1.144158288s to provisionDockerMachine
	I0923 11:36:16.121082 2903914 client.go:171] duration metric: took 8.641266117s to LocalClient.Create
	I0923 11:36:16.121105 2903914 start.go:167] duration metric: took 8.641338888s to libmachine.API.Create "addons-348379"
	I0923 11:36:16.121117 2903914 start.go:293] postStartSetup for "addons-348379" (driver="docker")
	I0923 11:36:16.121127 2903914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:36:16.121180 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:36:16.121219 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.140164 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.237274 2903914 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:36:16.240688 2903914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:36:16.240726 2903914 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:36:16.240751 2903914 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:36:16.240759 2903914 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:36:16.240772 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
	I0923 11:36:16.240845 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
	I0923 11:36:16.240872 2903914 start.go:296] duration metric: took 119.748923ms for postStartSetup
	I0923 11:36:16.241197 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.257321 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:36:16.257608 2903914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:36:16.257659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.273475 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.364420 2903914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:36:16.369417 2903914 start.go:128] duration metric: took 8.892040374s to createHost
	I0923 11:36:16.369444 2903914 start.go:83] releasing machines lock for "addons-348379", held for 8.892189913s
	I0923 11:36:16.369525 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.386496 2903914 ssh_runner.go:195] Run: cat /version.json
	I0923 11:36:16.386558 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.386844 2903914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:36:16.386924 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.402767 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.407504 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.620856 2903914 ssh_runner.go:195] Run: systemctl --version
	I0923 11:36:16.625351 2903914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:36:16.629494 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 11:36:16.656196 2903914 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 11:36:16.656273 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:36:16.685634 2903914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
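Two CNI cleanups happen here: any loopback config is patched up to cniVersion 1.0.0 (gaining a name field if it lacked one), and leftover bridge/podman configs are sidelined with an .mk_disabled suffix so they cannot conflict with the CNI minikube installs next. Because it is a rename rather than a delete, the change is reversible:

	# re-enable a sidelined config later, if ever needed
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist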
	I0923 11:36:16.685658 2903914 start.go:495] detecting cgroup driver to use...
	I0923 11:36:16.685694 2903914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:36:16.685752 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 11:36:16.698438 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:36:16.709958 2903914 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:36:16.710048 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:36:16.723912 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:36:16.738695 2903914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:36:16.833978 2903914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:36:16.926860 2903914 docker.go:233] disabling docker service ...
	I0923 11:36:16.926964 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:36:16.947106 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:36:16.959548 2903914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:36:17.053558 2903914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:36:17.135034 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:36:17.146613 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:36:17.163902 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:36:17.174292 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:36:17.185041 2903914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:36:17.185112 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:36:17.195986 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.206180 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:36:17.217391 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.228075 2903914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:36:17.237476 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:36:17.247362 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:36:17.257646 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:36:17.267821 2903914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:36:17.276866 2903914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:36:17.286512 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.359977 2903914 ssh_runner.go:195] Run: sudo systemctl restart containerd
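Taken together, the sed edits above leave /etc/containerd/config.toml set up for the cgroupfs driver, the runc v2 shim, the registry.k8s.io/pause:3.10 sandbox image, and unprivileged ports. The relevant fragment ends up approximately like this (a sketch of the effective keys under containerd's CRI plugin schema, not a dump of the actual file):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false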
	I0923 11:36:17.486930 2903914 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 11:36:17.487093 2903914 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 11:36:17.490604 2903914 start.go:563] Will wait 60s for crictl version
	I0923 11:36:17.490709 2903914 ssh_runner.go:195] Run: which crictl
	I0923 11:36:17.494017 2903914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:36:17.529914 2903914 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 11:36:17.530001 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.553062 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.581066 2903914 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 11:36:17.583092 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:17.598970 2903914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 11:36:17.602709 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.616517 2903914 kubeadm.go:883] updating cluster {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:36:17.616637 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:17.616705 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.653561 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.653588 2903914 containerd.go:534] Images already preloaded, skipping extraction
	I0923 11:36:17.653654 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.689043 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.689069 2903914 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:36:17.689077 2903914 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0923 11:36:17.689170 2903914 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-348379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
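In the drop-in unit above, the empty ExecStart= line is deliberate: systemd requires clearing an inherited ExecStart before a drop-in may replace it, so the pair of lines swaps in the minikube-managed kubelet command. The merged result can be inspected with:

	systemctl cat kubelet   # shows the base unit plus the 10-kubeadm.conf overrides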
	I0923 11:36:17.689240 2903914 ssh_runner.go:195] Run: sudo crictl info
	I0923 11:36:17.725180 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:17.725207 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:17.725219 2903914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:36:17.725244 2903914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-348379 NodeName:addons-348379 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:36:17.725401 2903914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-348379"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
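	# Before handing the file above to kubeadm init it can be sanity-checked;
	# recent kubeadm releases ship a validator subcommand (availability
	# depends on the kubeadm version, so treat this as an optional extra):
	#   sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml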
	
	I0923 11:36:17.725481 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:36:17.734617 2903914 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:36:17.734696 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:36:17.743298 2903914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 11:36:17.761798 2903914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:36:17.779190 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0923 11:36:17.797282 2903914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:36:17.801431 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.813470 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.904297 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:17.918201 2903914 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379 for IP: 192.168.49.2
	I0923 11:36:17.918278 2903914 certs.go:194] generating shared ca certs ...
	I0923 11:36:17.918311 2903914 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:17.918478 2903914 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
	I0923 11:36:18.402482 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt ...
	I0923 11:36:18.402521 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt: {Name:mka24ad8ce2563bd38493ad3048e3b202e9928cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.403346 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key ...
	I0923 11:36:18.403367 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key: {Name:mke301cd867e18ebea9d875f8c02fb489d6a0a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.404035 2903914 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
	I0923 11:36:18.591619 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt ...
	I0923 11:36:18.591652 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt: {Name:mkf29e9cf8d545d0d33d0ce8b9548c24a316f1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.591849 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key ...
	I0923 11:36:18.591862 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key: {Name:mk38a5ba119b442a98d5a1991cd20b7dc11fb378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.592490 2903914 certs.go:256] generating profile certs ...
	I0923 11:36:18.592562 2903914 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key
	I0923 11:36:18.592583 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt with IP's: []
	I0923 11:36:18.906095 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt ...
	I0923 11:36:18.906134 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: {Name:mk33102a3556c59cf025437aacb3628bfa41ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906340 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key ...
	I0923 11:36:18.906354 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key: {Name:mk1894a7120f896161f07a459fec6eb4fe11e236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906997 2903914 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb
	I0923 11:36:18.907023 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 11:36:19.176533 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb ...
	I0923 11:36:19.176571 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb: {Name:mkf3902710f18b86666bddc46eb9d246a2fd9230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177433 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb ...
	I0923 11:36:19.177458 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb: {Name:mkcc729dc3699800a37a33c607924c19bb2a2d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177602 2903914 certs.go:381] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt
	I0923 11:36:19.177687 2903914 certs.go:385] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key
	I0923 11:36:19.177743 2903914 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key
	I0923 11:36:19.177760 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt with IP's: []
	I0923 11:36:19.407564 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt ...
	I0923 11:36:19.407592 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt: {Name:mk60cabd91332996a9c3d4f42fab2e735667c2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408332 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key ...
	I0923 11:36:19.408353 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key: {Name:mk59bc62a8c0559971fc4c2dcb7a472d97d949c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408565 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:36:19.408610 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:36:19.408642 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:36:19.408675 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
	I0923 11:36:19.409266 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:36:19.438812 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:36:19.463399 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:36:19.487765 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:36:19.512554 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:36:19.537552 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:36:19.562531 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:36:19.587389 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:36:19.612515 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:36:19.638755 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:36:19.657011 2903914 ssh_runner.go:195] Run: openssl version
	I0923 11:36:19.662616 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:36:19.672363 2903914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675882 2903914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675950 2903914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.682948 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
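The b5213941.0 symlink name is not arbitrary: OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-hash filenames, and the hash comes straight from the certificate, which is exactly what the preceding openssl x509 -hash call computed:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"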
	I0923 11:36:19.692316 2903914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:36:19.695950 2903914 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:36:19.695999 2903914 kubeadm.go:392] StartCluster: {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:36:19.696080 2903914 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 11:36:19.696143 2903914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:36:19.737759 2903914 cri.go:89] found id: ""
	I0923 11:36:19.737855 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:36:19.746943 2903914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:36:19.756075 2903914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 11:36:19.756179 2903914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:36:19.767199 2903914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:36:19.767221 2903914 kubeadm.go:157] found existing configuration files:
	
	I0923 11:36:19.767362 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:36:19.776658 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:36:19.776748 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:36:19.785348 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:36:19.794057 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:36:19.794157 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:36:19.803152 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.812476 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:36:19.812574 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.821847 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:36:19.830557 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:36:19.830648 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:36:19.839343 2903914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 11:36:19.882483 2903914 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:36:19.882775 2903914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:36:19.901384 2903914 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 11:36:19.901507 2903914 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 11:36:19.901565 2903914 kubeadm.go:310] OS: Linux
	I0923 11:36:19.901637 2903914 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 11:36:19.901714 2903914 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 11:36:19.901776 2903914 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 11:36:19.901867 2903914 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 11:36:19.901970 2903914 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 11:36:19.902052 2903914 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 11:36:19.902117 2903914 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 11:36:19.902212 2903914 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 11:36:19.902294 2903914 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 11:36:19.967219 2903914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:36:19.967380 2903914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:36:19.967473 2903914 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:36:19.973209 2903914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:36:19.975734 2903914 out.go:235]   - Generating certificates and keys ...
	I0923 11:36:19.975831 2903914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:36:19.975904 2903914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:36:20.179743 2903914 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:36:21.047813 2903914 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:36:21.298950 2903914 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:36:22.135071 2903914 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:36:22.608576 2903914 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:36:22.609029 2903914 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:22.891026 2903914 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:36:22.891409 2903914 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:23.205606 2903914 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:36:23.489426 2903914 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:36:23.714238 2903914 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:36:23.714637 2903914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:36:23.917484 2903914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:36:24.438330 2903914 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:36:24.759712 2903914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:36:25.033943 2903914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:36:25.695483 2903914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:36:25.696139 2903914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:36:25.699152 2903914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:36:25.701227 2903914 out.go:235]   - Booting up control plane ...
	I0923 11:36:25.701329 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:36:25.702980 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:36:25.704104 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:36:25.714874 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:36:25.721041 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:36:25.721278 2903914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:36:25.822291 2903914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:36:25.822414 2903914 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:36:26.325513 2903914 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.540451ms
	I0923 11:36:26.325611 2903914 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:36:32.327784 2903914 kubeadm.go:310] [api-check] The API server is healthy after 6.002254671s
	I0923 11:36:32.349741 2903914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:36:32.363473 2903914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:36:32.390246 2903914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:36:32.390456 2903914 kubeadm.go:310] [mark-control-plane] Marking the node addons-348379 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:36:32.404726 2903914 kubeadm.go:310] [bootstrap-token] Using token: 9jvvlf.nkkd2cu2r67rq0id
	I0923 11:36:32.407710 2903914 out.go:235]   - Configuring RBAC rules ...
	I0923 11:36:32.407927 2903914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:36:32.415924 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:36:32.424675 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:36:32.430654 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:36:32.435150 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:36:32.439236 2903914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:36:32.750475 2903914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:36:33.169716 2903914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:36:33.734975 2903914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:36:33.736330 2903914 kubeadm.go:310] 
	I0923 11:36:33.736408 2903914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:36:33.736419 2903914 kubeadm.go:310] 
	I0923 11:36:33.736495 2903914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:36:33.736505 2903914 kubeadm.go:310] 
	I0923 11:36:33.736531 2903914 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:36:33.736593 2903914 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:36:33.736647 2903914 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:36:33.736656 2903914 kubeadm.go:310] 
	I0923 11:36:33.736710 2903914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:36:33.736719 2903914 kubeadm.go:310] 
	I0923 11:36:33.736766 2903914 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:36:33.736774 2903914 kubeadm.go:310] 
	I0923 11:36:33.736827 2903914 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:36:33.736907 2903914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:36:33.736980 2903914 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:36:33.736989 2903914 kubeadm.go:310] 
	I0923 11:36:33.737074 2903914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:36:33.737166 2903914 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:36:33.737175 2903914 kubeadm.go:310] 
	I0923 11:36:33.737258 2903914 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737363 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 \
	I0923 11:36:33.737389 2903914 kubeadm.go:310] 	--control-plane 
	I0923 11:36:33.737397 2903914 kubeadm.go:310] 
	I0923 11:36:33.737482 2903914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:36:33.737497 2903914 kubeadm.go:310] 
	I0923 11:36:33.737577 2903914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737677 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 
	I0923 11:36:33.741014 2903914 kubeadm.go:310] W0923 11:36:19.879102    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741352 2903914 kubeadm.go:310] W0923 11:36:19.880019    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741583 2903914 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 11:36:33.741697 2903914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
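Both warnings are benign for this run, and the first one carries its own remedy: the deprecated v1beta3 config can be migrated forward exactly as kubeadm suggests:

	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-new.yaml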
	I0923 11:36:33.741739 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:33.741754 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:33.744827 2903914 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 11:36:33.747515 2903914 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 11:36:33.751591 2903914 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 11:36:33.751612 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 11:36:33.770489 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 11:36:34.059421 2903914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:36:34.059507 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.059553 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-348379 minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=addons-348379 minikube.k8s.io/primary=true
	I0923 11:36:34.223427 2903914 ops.go:34] apiserver oom_adj: -16
	I0923 11:36:34.223613 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.724103 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.223668 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.724171 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.224392 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.724281 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.223628 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.724422 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.224601 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.335390 2903914 kubeadm.go:1113] duration metric: took 4.275959417s to wait for elevateKubeSystemPrivileges
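The repeated "get sa default" calls above are a poll: the cluster-admin binding cannot be created until the control plane has materialized the default ServiceAccount, so minikube retries roughly every 500ms until it appears. The same wait as a shell loop (a sketch of the logic, not minikube's Go code):

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done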
	I0923 11:36:38.335426 2903914 kubeadm.go:394] duration metric: took 18.639429725s to StartCluster
	I0923 11:36:38.335446 2903914 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336106 2903914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:36:38.336533 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336743 2903914 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:38.336888 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:36:38.337142 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.337173 2903914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:36:38.337257 2903914 addons.go:69] Setting yakd=true in profile "addons-348379"
	I0923 11:36:38.337273 2903914 addons.go:234] Setting addon yakd=true in "addons-348379"
	I0923 11:36:38.337299 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.337814 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338086 2903914 addons.go:69] Setting inspektor-gadget=true in profile "addons-348379"
	I0923 11:36:38.338109 2903914 addons.go:234] Setting addon inspektor-gadget=true in "addons-348379"
	I0923 11:36:38.338133 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338208 2903914 addons.go:69] Setting metrics-server=true in profile "addons-348379"
	I0923 11:36:38.338232 2903914 addons.go:234] Setting addon metrics-server=true in "addons-348379"
	I0923 11:36:38.338262 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338580 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338757 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.339010 2903914 addons.go:69] Setting cloud-spanner=true in profile "addons-348379"
	I0923 11:36:38.339029 2903914 addons.go:234] Setting addon cloud-spanner=true in "addons-348379"
	I0923 11:36:38.339055 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.339531 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.343469 2903914 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-348379"
	I0923 11:36:38.343503 2903914 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-348379"
	I0923 11:36:38.345984 2903914 out.go:177] * Verifying Kubernetes components...
	I0923 11:36:38.346027 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.346500 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.348537 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:38.345645 2903914 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-348379"
	I0923 11:36:38.365526 2903914 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:38.365567 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.366036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345659 2903914 addons.go:69] Setting default-storageclass=true in profile "addons-348379"
	I0923 11:36:38.373074 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-348379"
	I0923 11:36:38.373503 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345667 2903914 addons.go:69] Setting gcp-auth=true in profile "addons-348379"
	I0923 11:36:38.384131 2903914 mustload.go:65] Loading cluster: addons-348379
	I0923 11:36:38.384331 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.384582 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345675 2903914 addons.go:69] Setting ingress=true in profile "addons-348379"
	I0923 11:36:38.394024 2903914 addons.go:234] Setting addon ingress=true in "addons-348379"
	I0923 11:36:38.394117 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.345679 2903914 addons.go:69] Setting ingress-dns=true in profile "addons-348379"
	I0923 11:36:38.401504 2903914 addons.go:234] Setting addon ingress-dns=true in "addons-348379"
	I0923 11:36:38.404456 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.405051 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345945 2903914 addons.go:69] Setting registry=true in profile "addons-348379"
	I0923 11:36:38.410134 2903914 addons.go:234] Setting addon registry=true in "addons-348379"
	I0923 11:36:38.410210 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.411036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345953 2903914 addons.go:69] Setting storage-provisioner=true in profile "addons-348379"
	I0923 11:36:38.345956 2903914 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-348379"
	I0923 11:36:38.345960 2903914 addons.go:69] Setting volcano=true in profile "addons-348379"
	I0923 11:36:38.345964 2903914 addons.go:69] Setting volumesnapshots=true in profile "addons-348379"
	I0923 11:36:38.411256 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.454084 2903914 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:36:38.460897 2903914 addons.go:234] Setting addon storage-provisioner=true in "addons-348379"
	I0923 11:36:38.460953 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.461441 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.478144 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:36:38.478222 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:36:38.478322 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
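
The `docker container inspect -f` calls with the long Go template extract the host port mapped to the container's 22/tcp, which is how the SSH clients created later all end up dialing 127.0.0.1:41792. A sketch of the same lookup via the Docker CLI, using the exact template shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort asks the Docker CLI which host port is published for the
    // container's 22/tcp, i.e. where the minikube node accepts SSH.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fmt.Println(sshHostPort("addons-348379"))
    }
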
	I0923 11:36:38.487269 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-348379"
	I0923 11:36:38.487639 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.517904 2903914 addons.go:234] Setting addon volcano=true in "addons-348379"
	I0923 11:36:38.517966 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.518467 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.546347 2903914 addons.go:234] Setting addon volumesnapshots=true in "addons-348379"
	I0923 11:36:38.546411 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.546987 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.575673 2903914 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:36:38.575914 2903914 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:36:38.579125 2903914 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:38.579150 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:36:38.579221 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.579592 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:36:38.579639 2903914 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:36:38.579689 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.617860 2903914 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:36:38.622150 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:36:38.622177 2903914 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:36:38.622251 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.622735 2903914 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:36:38.624431 2903914 addons.go:234] Setting addon default-storageclass=true in "addons-348379"
	I0923 11:36:38.624466 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.628278 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.632340 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.636671 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:36:38.637544 2903914 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:38.637561 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:36:38.637622 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.655509 2903914 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:36:38.662366 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:36:38.665082 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:36:38.665107 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:36:38.665177 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.670015 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:36:38.676295 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:38.676385 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:36:38.676464 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.683562 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:36:38.686396 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:38.686419 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:36:38.686489 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.712304 2903914 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-348379"
	I0923 11:36:38.712346 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.712766 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.727410 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:36:38.730115 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:36:38.735471 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 11:36:38.735719 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.757102 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.760000 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:36:38.766393 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:36:38.766549 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:36:38.769513 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:36:38.771099 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.774083 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:36:38.774459 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:38.774507 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:36:38.774615 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.780134 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 11:36:38.788013 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:36:38.790672 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:36:38.794853 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:36:38.794879 2903914 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:36:38.794943 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.799096 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:36:38.799171 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:36:38.799267 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.815113 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 11:36:38.821525 2903914 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:38.821606 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 11:36:38.822703 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.827569 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850382 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850811 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.867021 2903914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:38.867042 2903914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:36:38.867113 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.883391 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.898738 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.922806 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.939666 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.947337 2903914 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:36:38.950063 2903914 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:36:38.952600 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:38.952625 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:36:38.952695 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.965723 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.981816 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.987132 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.991897 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	W0923 11:36:39.007637 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007696 2903914 retry.go:31] will retry after 239.86918ms: ssh: handshake failed: EOF
	W0923 11:36:39.007745 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007761 2903914 retry.go:31] will retry after 150.66552ms: ssh: handshake failed: EOF
	I0923 11:36:39.023482 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.023945 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
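
The sshutil lines above open one SSH client per concurrent addon deployer, all against 127.0.0.1:41792; under that burst two handshakes fail with EOF and are simply retried after a short backoff (the two retry.go lines). A minimal x/crypto/ssh sketch of dial-with-retry under the same key and user; the attempt count and backoff are illustrative, not minikube's exact values:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry retries the SSH handshake a few times, as the
    // "will retry after ...ms: ssh: handshake failed: EOF" lines above do.
    func dialWithRetry(addr, keyPath string, attempts int) (*ssh.Client, error) {
        pem, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node only
        }
        var lastErr error
        for i := 0; i < attempts; i++ {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err
            time.Sleep(200 * time.Millisecond) // back off before the next handshake
        }
        return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        _, err := dialWithRetry("127.0.0.1:41792",
            "/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa", 3)
        fmt.Println(err)
    }
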
	I0923 11:36:39.169972 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:39.170259 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 11:36:39.494971 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:39.556492 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:39.608428 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:39.623138 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:36:39.623206 2903914 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:36:39.709728 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:36:39.709758 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:36:39.777118 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:39.827433 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:36:39.827460 2903914 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:36:39.863119 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:39.870165 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:36:39.870234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:36:39.883044 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:39.890345 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:39.896655 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:36:39.896731 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:36:39.967387 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:36:39.967475 2903914 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:36:40.018137 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:36:40.018231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:36:40.030206 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:36:40.030287 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:36:40.050639 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:40.058613 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.058708 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:36:40.150893 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:36:40.151005 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:36:40.242616 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:36:40.242710 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:36:40.287048 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:36:40.287084 2903914 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:36:40.486475 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:36:40.486504 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:36:40.493831 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.494920 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:36:40.494943 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:36:40.557986 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.558018 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:36:40.619083 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:40.619108 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:36:40.657629 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:36:40.657660 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:36:40.734595 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:36:40.734624 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:36:40.801552 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:36:40.801595 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:36:40.865890 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.930189 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:36:40.930231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:36:40.951956 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:36:40.951998 2903914 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:36:40.961846 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:36:40.961876 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:36:40.972100 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:41.216224 2903914 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.216250 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:36:41.260577 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:36:41.260621 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:36:41.278445 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.294750 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:36:41.294791 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:36:41.323456 2903914 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.153148456s)
	I0923 11:36:41.323525 2903914 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
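
The 2.15s sed pipeline that just completed rewrites the coredns ConfigMap in place. Reconstructed from the sed expression itself, the replaced Corefile carries a `log` line ahead of `errors` plus this hosts block, inserted just above the existing forward line, so host.minikube.internal resolves to the bridge gateway before queries fall through to the normal forwarders (surrounding plugins elided):

        log
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
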
	I0923 11:36:41.323500 2903914 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.153503843s)
	I0923 11:36:41.324480 2903914 node_ready.go:35] waiting up to 6m0s for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330122 2903914 node_ready.go:49] node "addons-348379" has status "Ready":"True"
	I0923 11:36:41.330152 2903914 node_ready.go:38] duration metric: took 5.643428ms for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330163 2903914 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:36:41.339728 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
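
pod_ready.go's wait reduces to a readiness-condition check against each system-critical pod. A client-go sketch of the core predicate behind the `has status "Ready":"True"` lines that follow, assuming the kubeconfig path from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // no PodReady condition reported yet
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(podIsReady(cs, "kube-system", "coredns-7c65d6cfc9-ppz9h"))
    }
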
	I0923 11:36:41.543987 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:36:41.544058 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:36:41.648713 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.648741 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:36:41.746379 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:36:41.746407 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:36:41.828842 2903914 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-348379" context rescaled to 1 replicas
	I0923 11:36:41.902914 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.921919 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.42690557s)
	I0923 11:36:41.921979 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.365461937s)
	I0923 11:36:41.922006 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.313555468s)
	I0923 11:36:42.029240 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:36:42.029272 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:36:42.347806 2903914 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347883 2903914 pod_ready.go:82] duration metric: took 1.008114432s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	E0923 11:36:42.347920 2903914 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347968 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
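
The pod waited on first (coredns-7c65d6cfc9-g6mtd) disappears because the coredns deployment was rescaled to one replica a moment earlier (the kapi.go line at 11:36:41.828842), so the waiter treats NotFound as "skip this pod" rather than a failure and moves on to the surviving replica. A sketch of that branch using the standard apimachinery error helper:

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitOrSkip shows the NotFound branch from the log: a pod deleted while
    // we wait (e.g. after a deployment rescale) is skipped, not an error.
    func waitOrSkip(cs *kubernetes.Clientset, ns, name string) error {
        _, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            fmt.Printf("pod %q not found (skipping!)\n", name)
            return nil // fall through to the replacement pod
        }
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitOrSkip(cs, "kube-system", "coredns-7c65d6cfc9-g6mtd"))
    }
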
	I0923 11:36:42.549396 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:36:42.549431 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:36:42.830524 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:42.830560 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:36:43.211480 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:44.381316 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:44.884644 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.107485626s)
	I0923 11:36:45.843349 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:36:45.843447 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:45.872465 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.470866 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:36:46.702197 2903914 addons.go:234] Setting addon gcp-auth=true in "addons-348379"
	I0923 11:36:46.702270 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:46.702828 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:46.733217 2903914 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:36:46.733280 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:46.775337 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.854788 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:47.501824 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.638626321s)
	I0923 11:36:47.501900 2903914 addons.go:475] Verifying addon ingress=true in "addons-348379"
	I0923 11:36:47.504030 2903914 out.go:177] * Verifying ingress addon...
	I0923 11:36:47.506684 2903914 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:36:47.511450 2903914 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:36:47.511576 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
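
Unlike the per-pod wait above, kapi.go waits on a label selector: list every matching pod and keep polling while any is still Pending, which is what the repeated `current state: Pending` lines below are reporting. A client-go sketch of one poll iteration:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allRunning is one iteration of a kapi.go-style wait: list the pods
    // matching the selector and report whether every one has reached Running.
    func allRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil // nothing scheduled yet
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil // still Pending: keep polling
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(allRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"))
    }
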
	I0923 11:36:48.013380 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.544278 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.902915 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:49.026130 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.515743 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.786183 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.903059538s)
	I0923 11:36:49.786253 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.89583004s)
	I0923 11:36:49.786324 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.735598298s)
	I0923 11:36:49.786361 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.292505813s)
	I0923 11:36:49.786376 2903914 addons.go:475] Verifying addon registry=true in "addons-348379"
	I0923 11:36:49.786562 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.920643746s)
	I0923 11:36:49.786579 2903914 addons.go:475] Verifying addon metrics-server=true in "addons-348379"
	I0923 11:36:49.786620 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.814490533s)
	I0923 11:36:49.786930 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.508442064s)
	W0923 11:36:49.786964 2903914 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:36:49.786984 2903914 retry.go:31] will retry after 231.122068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:36:49.787069 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.884124931s)
	I0923 11:36:49.788510 2903914 out.go:177] * Verifying registry addon...
	I0923 11:36:49.789825 2903914 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-348379 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:36:49.793761 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:36:49.857501 2903914 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:36:49.857529 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.019003 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
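
The failure being retried here is the classic CRD-establishment race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the API server has not yet registered the new kinds ("ensure CRDs are installed first"), so the whole batch is re-applied once the CRDs exist. A sketch of retrying an apply while that specific error appears; the kubectl invocation mirrors the log, while the retry count and backoff are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // applyWithCRDRetry re-runs "kubectl apply" while the output shows the
    // "no matches for kind" error, giving the API server time to establish
    // the CRDs created on the first pass.
    func applyWithCRDRetry(files []string, attempts int) error {
        args := append([]string{"apply"}, manifestArgs(files)...)
        var out []byte
        var err error
        for i := 0; i < attempts; i++ {
            out, err = exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                return nil
            }
            if !strings.Contains(string(out), "no matches for kind") {
                break // a different error; don't retry
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("apply failed: %v\n%s", err, out)
    }

    // manifestArgs turns each manifest path into a "-f path" pair.
    func manifestArgs(files []string) []string {
        var a []string
        for _, f := range files {
            a = append(a, "-f", f)
        }
        return a
    }

    func main() {
        fmt.Println(applyWithCRDRetry([]string{
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
        }, 3))
    }
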
	I0923 11:36:50.091953 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.88041396s)
	I0923 11:36:50.091996 2903914 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:50.092188 2903914 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.35893014s)
	I0923 11:36:50.094620 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:50.094707 2903914 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:36:50.096428 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.099435 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:36:50.101290 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:36:50.102965 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:36:50.103035 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:36:50.192143 2903914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:36:50.192217 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.253203 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:36:50.253267 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:36:50.330971 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.356163 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.356234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:36:50.430758 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.512255 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.604722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.797707 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.023077 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.104450 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.297470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.356586 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:51.511733 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.617197 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.799992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.872269 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853212545s)
	I0923 11:36:51.872449 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441621639s)
	I0923 11:36:51.875653 2903914 addons.go:475] Verifying addon gcp-auth=true in "addons-348379"
	I0923 11:36:51.880701 2903914 out.go:177] * Verifying gcp-auth addon...
	I0923 11:36:51.886238 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:36:51.899994 2903914 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:36:52.012221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.113517 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.299559 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.512212 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.605423 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.799422 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.854916 2903914 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.854943 2903914 pod_ready.go:82] duration metric: took 10.506940522s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.854956 2903914 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861027 2903914 pod_ready.go:93] pod "etcd-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.861059 2903914 pod_ready.go:82] duration metric: took 6.063045ms for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861112 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867900 2903914 pod_ready.go:93] pod "kube-apiserver-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.867934 2903914 pod_ready.go:82] duration metric: took 6.806328ms for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867947 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875428 2903914 pod_ready.go:93] pod "kube-controller-manager-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.875465 2903914 pod_ready.go:82] duration metric: took 7.477644ms for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875477 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881487 2903914 pod_ready.go:93] pod "kube-proxy-nqbmm" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.881527 2903914 pod_ready.go:82] duration metric: took 6.024203ms for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881558 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.013082 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.115359 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.261367 2903914 pod_ready.go:93] pod "kube-scheduler-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:53.261440 2903914 pod_ready.go:82] duration metric: took 379.865643ms for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.261468 2903914 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.298690 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:53.511926 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.605400 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.806273 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.013143 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.105281 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.297943 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.511688 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.604819 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.801146 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.012597 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.104977 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.267783 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:55.297710 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.513110 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.604704 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.797620 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.013461 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.105679 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.298722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.511968 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.607325 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.804470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.011592 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.104763 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.298808 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.511851 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.612444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.768189 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:57.797413 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.012279 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.104746 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.298054 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.511885 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.604675 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.801111 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.014365 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.115071 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.299589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.511922 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.604297 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.768227 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:59.798277 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.038178 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.105635 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.333590 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.512357 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.604689 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.799851 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.011394 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.105032 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.310842 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.511596 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.605046 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.768750 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:01.798428 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.013963 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.113737 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.297934 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.511024 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.605393 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.798053 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.012529 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.104923 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.298010 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.512557 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.604282 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.768828 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:03.798455 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.015940 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.104743 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.299059 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.512309 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.606024 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.798339 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.019138 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.104829 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.298375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.511973 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.604665 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.767100 2903914 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:05.767130 2903914 pod_ready.go:82] duration metric: took 12.505640545s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:05.767142 2903914 pod_ready.go:39] duration metric: took 24.436967089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:05.767156 2903914 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:37:05.767223 2903914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:05.781985 2903914 api_server.go:72] duration metric: took 27.445202984s to wait for apiserver process to appear ...
	I0923 11:37:05.782061 2903914 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:37:05.782092 2903914 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 11:37:05.789746 2903914 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 11:37:05.790754 2903914 api_server.go:141] control plane version: v1.31.1
	I0923 11:37:05.790781 2903914 api_server.go:131] duration metric: took 8.705461ms to wait for apiserver health ...
	I0923 11:37:05.790793 2903914 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:37:05.800773 2903914 system_pods.go:59] 18 kube-system pods found
	I0923 11:37:05.800813 2903914 system_pods.go:61] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.800824 2903914 system_pods.go:61] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.800829 2903914 system_pods.go:61] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.800839 2903914 system_pods.go:61] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.800844 2903914 system_pods.go:61] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.800848 2903914 system_pods.go:61] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.800852 2903914 system_pods.go:61] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.800856 2903914 system_pods.go:61] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.800860 2903914 system_pods.go:61] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.800866 2903914 system_pods.go:61] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.800870 2903914 system_pods.go:61] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.800875 2903914 system_pods.go:61] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.800884 2903914 system_pods.go:61] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.800892 2903914 system_pods.go:61] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.800908 2903914 system_pods.go:61] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.800916 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800924 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800931 2903914 system_pods.go:61] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.800938 2903914 system_pods.go:74] duration metric: took 10.139277ms to wait for pod list to return data ...
	I0923 11:37:05.800948 2903914 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:37:05.802797 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.803949 2903914 default_sa.go:45] found service account: "default"
	I0923 11:37:05.803977 2903914 default_sa.go:55] duration metric: took 3.018472ms for default service account to be created ...
	I0923 11:37:05.803986 2903914 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:37:05.813725 2903914 system_pods.go:86] 18 kube-system pods found
	I0923 11:37:05.813761 2903914 system_pods.go:89] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.813771 2903914 system_pods.go:89] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.813777 2903914 system_pods.go:89] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.813785 2903914 system_pods.go:89] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.813789 2903914 system_pods.go:89] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.813793 2903914 system_pods.go:89] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.813798 2903914 system_pods.go:89] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.813810 2903914 system_pods.go:89] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.813815 2903914 system_pods.go:89] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.813824 2903914 system_pods.go:89] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.813828 2903914 system_pods.go:89] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.813835 2903914 system_pods.go:89] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.813845 2903914 system_pods.go:89] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.813851 2903914 system_pods.go:89] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.813859 2903914 system_pods.go:89] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.813866 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813877 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813881 2903914 system_pods.go:89] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.813889 2903914 system_pods.go:126] duration metric: took 9.896579ms to wait for k8s-apps to be running ...
	I0923 11:37:05.813904 2903914 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:37:05.813964 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:37:05.826195 2903914 system_svc.go:56] duration metric: took 12.281298ms WaitForService to wait for kubelet
	I0923 11:37:05.826224 2903914 kubeadm.go:582] duration metric: took 27.489446441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:37:05.826246 2903914 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:37:05.829405 2903914 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 11:37:05.829438 2903914 node_conditions.go:123] node cpu capacity is 2
	I0923 11:37:05.829451 2903914 node_conditions.go:105] duration metric: took 3.199247ms to run NodePressure ...
	I0923 11:37:05.829481 2903914 start.go:241] waiting for startup goroutines ...
	I0923 11:37:06.016364 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.105089 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.297935 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:06.512579 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.605014 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.797841 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.011801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.104294 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.298088 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.511664 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.604893 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.801940 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.013764 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.105345 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.299766 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.512208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.605989 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.797920 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.013457 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.113285 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.298479 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.519639 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.621187 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.798092 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.023070 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.120870 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.299336 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.511998 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.604502 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.798656 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.011605 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.104357 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.297872 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.510891 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.605182 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.798112 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.016742 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.106392 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.302890 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.514134 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.606249 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.802344 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.012829 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.106630 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.297952 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.520400 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.609375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.799034 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.014344 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.118058 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.298788 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.510806 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.631742 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.797720 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.019826 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.106226 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.298830 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.511804 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.605411 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.798246 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.012028 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.104868 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.297740 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.524665 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.625850 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.797603 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.011338 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.103818 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.298307 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.512577 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.605193 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.798142 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.013543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.105035 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.297589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.512084 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.605341 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.798244 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.012543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.113664 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.298444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.511599 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.606804 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.798859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.017663 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.106755 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.297564 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.512160 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.604958 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.797812 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.013732 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.105616 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.298921 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.510795 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.604820 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.797800 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.012174 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.114441 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.298237 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.513158 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.625275 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.799161 2903914 kapi.go:107] duration metric: took 33.005400732s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 11:37:23.013096 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.105126 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:23.511910 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.613223 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.014075 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.105028 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.511000 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.605248 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.014495 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.111460 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.512685 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.612274 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.013492 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.105721 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.511127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.605859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.012240 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.105210 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.511957 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.604964 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.012182 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.104813 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.511895 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.605303 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.013053 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.104426 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.511346 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.603992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.014157 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.118062 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.511690 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.604969 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.014750 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.105518 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.511111 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.605002 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.012362 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.104960 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.511542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.604045 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.013753 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.104350 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.512293 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.614483 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:34.016127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:34.104734 2903914 kapi.go:107] duration metric: took 44.005304527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 11:37:34.510755 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.019903 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.510951 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.014603 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.510966 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.016710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.511532 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.013339 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.511853 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.012335 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.511569 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.013791 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.511129 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.011102 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.510793 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.015629 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.512029 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.011711 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.510801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.011721 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.511165 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.039643 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.511176 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.011883 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.511761 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.011001 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.511048 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.012925 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.511632 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.010792 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.511167 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.018047 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.511473 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.013208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.511710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.011604 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.512230 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.011680 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.511429 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.012725 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.511977 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.013759 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.512155 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.012542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.512221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:57.011684 2903914 kapi.go:107] duration metric: took 1m9.505004156s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 11:42:51.886491 2903914 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 11:42:51.886551 2903914 kapi.go:107] duration metric: took 6m0.000318633s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 11:42:51.886644 2903914 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 11:42:51.888644 2903914 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress
	I0923 11:42:51.890515 2903914 addons.go:510] duration metric: took 6m13.553319939s for enable addons: enabled=[ingress-dns nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress]
	I0923 11:42:51.890575 2903914 start.go:246] waiting for cluster config update ...
	I0923 11:42:51.890600 2903914 start.go:255] writing updated cluster config ...
	I0923 11:42:51.890918 2903914 ssh_runner.go:195] Run: rm -f paused
	I0923 11:42:52.246444 2903914 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:42:52.248672 2903914 out.go:177] * Done! kubectl is now configured to use "addons-348379" cluster and "default" namespace by default
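
The long runs of kapi.go:96 "waiting for pod ..., current state: Pending" entries above come from a fixed-interval polling loop that re-lists pods matching a label selector until they report Ready or the per-addon context deadline expires; the gcp-auth failure at 11:42:51 is that loop reaching its 6m0s deadline while the pods were still Pending, surfacing as "client rate limiter Wait returned an error: context deadline exceeded". The following is a minimal, self-contained Go sketch of that pattern, assuming a hypothetical podsReady check standing in for the client-go list call; it illustrates the mechanism and is not minikube's actual kapi.go implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // podsReady is a hypothetical stand-in for listing pods by label selector
    // and checking their phase; here it never succeeds, like the gcp-auth pods.
    func podsReady(selector string) (bool, error) {
    	return false, nil
    }

    // waitForPods polls on a fixed interval until the pods match or ctx
    // expires, mirroring the "waiting for pod ..." cadence in the log above.
    func waitForPods(ctx context.Context, selector string, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		ok, err := podsReady(selector)
    		if err == nil && ok {
    			return nil
    		}
    		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
    		select {
    		case <-ctx.Done():
    			// Wraps context.DeadlineExceeded, as at 11:42:51 above.
    			return fmt.Errorf("waiting for %s pods: %w", selector, ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()
    	err := waitForPods(ctx, "kubernetes.io/minikube-addons=gcp-auth", 500*time.Millisecond)
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    }

Run against a selector that never becomes ready, this prints one Pending line per tick and then returns the wrapped context deadline error, which is exactly the shape of the gcp-auth timeout recorded above.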
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	052af9fc2b4c1       289a818c8d9c5       26 minutes ago      Running             controller                               0                   47eb6c64c7a7d       ingress-nginx-controller-bc57996ff-6wq45
	967992dcbaf35       ee6d597e62dc8       26 minutes ago      Running             csi-snapshotter                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c27f769c6bce0       642ded511e141       26 minutes ago      Running             csi-provisioner                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	0faa259c6b420       922312104da8a       26 minutes ago      Running             liveness-probe                           0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	994b2f0a53275       08f6b2990811a       26 minutes ago      Running             hostpath                                 0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	ccaa6826447e2       9a80d518f102c       27 minutes ago      Running             csi-attacher                             0                   3c219582bf398       csi-hostpath-attacher-0
	63bc3e7f0cc00       420193b27261a       27 minutes ago      Exited              patch                                    0                   6fd7a092c3487       ingress-nginx-admission-patch-bfw56
	c5db857da31e6       4d1e5c3e97420       27 minutes ago      Running             volume-snapshot-controller               0                   311300b7a364c       snapshot-controller-56fcc65765-d47ng
	d1ac50a6261dd       420193b27261a       27 minutes ago      Exited              create                                   0                   895d61d9c16ad       ingress-nginx-admission-create-fwt6v
	ac4d50ae15f93       4d1e5c3e97420       27 minutes ago      Running             volume-snapshot-controller               0                   698a149c7f280       snapshot-controller-56fcc65765-dchr7
	ed38cab12122b       0107d56dbc0be       27 minutes ago      Running             node-driver-registrar                    0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	5e5a8e69b34fd       7ce2150c8929b       27 minutes ago      Running             local-path-provisioner                   0                   9bc0d2214e2a7       local-path-provisioner-86d989889c-h5pl9
	cecddb85ce0f0       487fa743e1e22       27 minutes ago      Running             csi-resizer                              0                   85d2097f2da83       csi-hostpath-resizer-0
	a6cc88c765de7       1461903ec4fe9       27 minutes ago      Running             csi-external-health-monitor-controller   0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c37000ef28652       35508c2f890c4       27 minutes ago      Running             minikube-ingress-dns                     0                   d881770814e2f       kube-ingress-dns-minikube
	497f8c41b274e       2f6c962e7b831       27 minutes ago      Running             coredns                                  0                   d2d48d489a636       coredns-7c65d6cfc9-ppz9h
	b05bd4b18e280       ba04bb24b9575       27 minutes ago      Running             storage-provisioner                      0                   e816c2d6b5461       storage-provisioner
	a7e55167b7b39       6a23fa8fd2b78       27 minutes ago      Running             kindnet-cni                              0                   9389ee1ac67f5       kindnet-4kcdh
	005547c4c4723       24a140c548c07       27 minutes ago      Running             kube-proxy                               0                   a4defdda67173       kube-proxy-nqbmm
	9255b7a6f4a59       7f8aa378bb47d       28 minutes ago      Running             kube-scheduler                           0                   cae78edec7e41       kube-scheduler-addons-348379
	92413a7a8d6f6       279f381cb3736       28 minutes ago      Running             kube-controller-manager                  0                   731c7b79dc027       kube-controller-manager-addons-348379
	20e5f68e09619       d3f53a98c0a9d       28 minutes ago      Running             kube-apiserver                           0                   bdcb66b095a48       kube-apiserver-addons-348379
	8abc59946512a       27e3830e14027       28 minutes ago      Running             etcd                                     0                   a8c7d184a28f5       etcd-addons-348379
	
	
	==> containerd <==
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.390505956Z" level=info msg="RemovePodSandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" returns successfully"
	Sep 23 11:58:17 addons-348379 containerd[816]: time="2024-09-23T11:58:17.118348752Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\""
	Sep 23 11:58:17 addons-348379 containerd[816]: time="2024-09-23T11:58:17.120902333Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:58:17 addons-348379 containerd[816]: time="2024-09-23T11:58:17.173967253Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized"
	Sep 23 11:58:17 addons-348379 containerd[816]: time="2024-09-23T11:58:17.174024828Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: active requests=0, bytes read=0"
	Sep 23 11:58:31 addons-348379 containerd[816]: time="2024-09-23T11:58:31.117587208Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\""
	Sep 23 11:58:31 addons-348379 containerd[816]: time="2024-09-23T11:58:31.120623305Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:58:31 addons-348379 containerd[816]: time="2024-09-23T11:58:31.189276105Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-controller-manager/manifests/sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: 401 Unauthorized"
	Sep 23 11:58:31 addons-348379 containerd[816]: time="2024-09-23T11:58:31.189324391Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: active requests=0, bytes read=0"
	Sep 23 11:58:34 addons-348379 containerd[816]: time="2024-09-23T11:58:34.117220417Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\""
	Sep 23 11:58:34 addons-348379 containerd[816]: time="2024-09-23T11:58:34.119466939Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:58:34 addons-348379 containerd[816]: time="2024-09-23T11:58:34.176509515Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized"
	Sep 23 11:58:34 addons-348379 containerd[816]: time="2024-09-23T11:58:34.176619636Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: active requests=0, bytes read=0"
	Sep 23 12:03:28 addons-348379 containerd[816]: time="2024-09-23T12:03:28.117650804Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\""
	Sep 23 12:03:28 addons-348379 containerd[816]: time="2024-09-23T12:03:28.119866688Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 12:03:28 addons-348379 containerd[816]: time="2024-09-23T12:03:28.180102827Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized"
	Sep 23 12:03:28 addons-348379 containerd[816]: time="2024-09-23T12:03:28.180216139Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: active requests=0, bytes read=0"
	Sep 23 12:03:36 addons-348379 containerd[816]: time="2024-09-23T12:03:36.117935456Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\""
	Sep 23 12:03:36 addons-348379 containerd[816]: time="2024-09-23T12:03:36.120097596Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 12:03:36 addons-348379 containerd[816]: time="2024-09-23T12:03:36.170409503Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized"
	Sep 23 12:03:36 addons-348379 containerd[816]: time="2024-09-23T12:03:36.170449297Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: active requests=0, bytes read=0"
	Sep 23 12:03:38 addons-348379 containerd[816]: time="2024-09-23T12:03:38.117655018Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\""
	Sep 23 12:03:38 addons-348379 containerd[816]: time="2024-09-23T12:03:38.120041576Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 12:03:38 addons-348379 containerd[816]: time="2024-09-23T12:03:38.189871126Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-controller-manager/manifests/sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: 401 Unauthorized"
	Sep 23 12:03:38 addons-348379 containerd[816]: time="2024-09-23T12:03:38.189908525Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: active requests=0, bytes read=0"
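	
	Analysis: every pull above fails on the same malformed reference. The images are addressed as docker.io/docker.io/volcanosh/... (note the doubled prefix), so containerd asks registry-1.docker.io for a repository literally named "docker.io/volcanosh/vc-scheduler", which does not exist there and is answered with 401 Unauthorized. The recurring `failed to decode hosts.toml" error="invalid `host` tree` message is a separate, node-local problem: a malformed registry hosts file. A minimal sketch of both checks, assuming crictl is available on the node and that the registry config lives at the conventional path /etc/containerd/certs.d/docker.io/hosts.toml:
	
	# The doubled prefix is the repository containerd actually requests; with one
	# "docker.io/" stripped, the same digest should resolve (illustrative, run on the node):
	crictl pull docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
	# A well-formed hosts.toml, for comparison with whatever currently fails to decode:
	cat > /etc/containerd/certs.d/docker.io/hosts.toml <<-'EOF'
	server = "https://registry-1.docker.io"
	
	[host."https://registry-1.docker.io"]
	  capabilities = ["pull", "resolve"]
	EOF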
	
	
	==> coredns [497f8c41b274e14c72d9933f13fac36b6c4acf8def62c9f3205823623e58226d] <==
	[INFO] 10.244.0.8:44197 - 54267 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180151s
	[INFO] 10.244.0.8:41305 - 6638 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001704017s
	[INFO] 10.244.0.8:41305 - 35565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141999s
	[INFO] 10.244.0.8:40738 - 27223 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080918s
	[INFO] 10.244.0.8:40738 - 10324 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129353s
	[INFO] 10.244.0.8:56286 - 60697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080098s
	[INFO] 10.244.0.8:56286 - 24093 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043076s
	[INFO] 10.244.0.8:43089 - 14336 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004978s
	[INFO] 10.244.0.8:43089 - 63494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035692s
	[INFO] 10.244.0.8:57209 - 23569 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041272s
	[INFO] 10.244.0.8:57209 - 18191 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041304s
	[INFO] 10.244.0.8:58538 - 8344 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004349043s
	[INFO] 10.244.0.8:58538 - 22686 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003876717s
	[INFO] 10.244.0.8:48601 - 29559 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050166s
	[INFO] 10.244.0.8:48601 - 628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054556s
	[INFO] 10.244.0.8:43467 - 15089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093161s
	[INFO] 10.244.0.8:43467 - 32245 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042002s
	[INFO] 10.244.0.8:54486 - 7532 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066798s
	[INFO] 10.244.0.8:54486 - 50287 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038416s
	[INFO] 10.244.0.8:33247 - 25963 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052963s
	[INFO] 10.244.0.8:33247 - 9581 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040123s
	[INFO] 10.244.0.8:42814 - 61299 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001881616s
	[INFO] 10.244.0.8:42814 - 6802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001453827s
	[INFO] 10.244.0.8:59224 - 13301 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046384s
	[INFO] 10.244.0.8:59224 - 30455 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035372s
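	
	Analysis: this coredns log is healthy. The NXDOMAIN entries are ordinary search-path expansion: the queried name registry.kube-system.svc.cluster.local has four dots, below the usual ndots:5 threshold, so each search domain is tried first, and every sequence ends in NOERROR for the bare name. DNS can therefore be ruled out as a cause of the Volcano failure. A hedged reconstruction of the pod resolver config implied by the suffixes above (the nameserver address is the conventional kube-dns ClusterIP, not confirmed by this log):
	
	cat /etc/resolv.conf
	search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5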
	
	
	==> describe nodes <==
	Name:               addons-348379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-348379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=addons-348379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-348379
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-348379"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:36:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-348379
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:04:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:03:35 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:03:35 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:03:35 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:03:35 +0000   Mon, 23 Sep 2024 11:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-348379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb92b9be866447b6af2ef85f12013e1b
	  System UUID:                84447a6a-e17a-42ff-ba46-fb82e93bc172
	  Boot ID:                    d8899273-2c3a-49f7-8c9a-66d2209373ba
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6wq45    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         27m
	  kube-system                 coredns-7c65d6cfc9-ppz9h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 csi-hostpathplugin-zdwf8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 etcd-addons-348379                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         27m
	  kube-system                 kindnet-4kcdh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27m
	  kube-system                 kube-apiserver-addons-348379                250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-addons-348379       200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-nqbmm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-addons-348379                100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-56fcc65765-d47ng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-56fcc65765-dchr7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  local-path-storage          local-path-provisioner-86d989889c-h5pl9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-admission-7f54bd7598-s85bg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-admission-init-f2bhm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-controllers-5ff7c5d4db-w658s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  volcano-system              volcano-scheduler-79dc4b78bb-2vx88          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27m                kube-proxy       
	  Normal   NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    28m (x7 over 28m)  kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 27m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 27m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m                kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m                kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m                kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27m                node-controller  Node addons-348379 event: Registered Node addons-348379 in Controller
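	
	Analysis: the node itself is fine: it is Ready, its 2-CPU / ~8Gi allocatable easily covers the 950m CPU and 310Mi requested, and the four stuck volcano-system pods request no resources at all, so this is an image-pull failure rather than a scheduling or capacity problem. A quick check (a sketch, assuming this cluster's kubeconfig context) that the doubled registry prefix originates in the pod spec rather than in a runtime rewrite:
	
	kubectl --context addons-348379 -n volcano-system get pod volcano-scheduler-79dc4b78bb-2vx88 \
	  -o jsonpath='{.spec.containers[0].image}{"\n"}'
	# Expected output, matching the containerd log:
	# docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882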
	
	
	==> dmesg <==
	
	
	==> etcd [8abc59946512ab5ab4d902de194545051c7e577bd5196212bf85326ca705cd43] <==
	{"level":"info","ts":"2024-09-23T11:36:27.825628Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-348379 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:36:27.825777Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827294Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827491Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827732Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.831921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.856413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.847318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:36:27.847973Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.857537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.856653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:28.587946Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1709}
	{"level":"info","ts":"2024-09-23T11:46:28.663638Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1709,"took":"75.159252ms","hash":39109330,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4308992,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-23T11:46:28.663685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":39109330,"revision":1709,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T11:51:28.595181Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2154}
	{"level":"info","ts":"2024-09-23T11:51:28.614611Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2154,"took":"18.702318ms","hash":1848337147,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":3706880,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-09-23T11:51:28.614664Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1848337147,"revision":2154,"compact-revision":1709}
	{"level":"info","ts":"2024-09-23T11:56:28.605480Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2714}
	{"level":"info","ts":"2024-09-23T11:56:28.626031Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2714,"took":"19.936209ms","hash":17698238,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":3883008,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-23T11:56:28.626090Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":17698238,"revision":2714,"compact-revision":2154}
	{"level":"info","ts":"2024-09-23T12:01:28.612196Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3232}
	{"level":"info","ts":"2024-09-23T12:01:28.631783Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":3232,"took":"18.945715ms","hash":352004518,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":3051520,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-09-23T12:01:28.631839Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":352004518,"revision":3232,"compact-revision":2714}
	
	
	==> kernel <==
	 12:04:28 up 1 day, 19:46,  0 users,  load average: 0.18, 0.20, 0.67
	Linux addons-348379 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a7e55167b7b39430011e4998f44d30eaec7edc15414dc232724161a09a27e599] <==
	I0923 12:02:20.314283       1 main.go:299] handling current node
	I0923 12:02:30.312130       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:02:30.312162       1 main.go:299] handling current node
	I0923 12:02:40.312767       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:02:40.312805       1 main.go:299] handling current node
	I0923 12:02:50.317008       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:02:50.317043       1 main.go:299] handling current node
	I0923 12:03:00.314362       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:03:00.314526       1 main.go:299] handling current node
	I0923 12:03:10.320953       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:03:10.320994       1 main.go:299] handling current node
	I0923 12:03:20.316293       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:03:20.316329       1 main.go:299] handling current node
	I0923 12:03:30.314829       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:03:30.315033       1 main.go:299] handling current node
	I0923 12:03:40.313031       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:03:40.313079       1 main.go:299] handling current node
	I0923 12:03:50.313081       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:03:50.313121       1 main.go:299] handling current node
	I0923 12:04:00.317210       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:04:00.317320       1 main.go:299] handling current node
	I0923 12:04:10.312929       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:04:10.312962       1 main.go:299] handling current node
	I0923 12:04:20.312240       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 12:04:20.312342       1 main.go:299] handling current node
	
	
	==> kube-apiserver [20e5f68e09619b9d622e831b0c429aae0e245dfb5c647d9e6fd9193c6cdfedac] <==
	W0923 12:00:54.925028       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:00:55.027394       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:00:55.027442       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:00:55.030119       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:01:07.555994       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:01:07.556037       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:01:07.557937       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:01:54.934268       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:01:54.934311       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:01:54.936065       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:01:55.036531       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:01:55.036580       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:01:55.038488       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:02:54.945805       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:02:54.945848       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:02:54.947661       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:02:55.045537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:02:55.045592       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:02:55.048248       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:03:54.956675       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:03:54.956717       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:03:54.958421       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 12:03:55.054615       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 12:03:55.054665       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 12:03:55.056319       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
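	
	Analysis: these webhook failures are downstream of the same image-pull problem. volcano-admission never started, so its Service has no ready endpoints and every call to mutatepod.volcano.sh is refused; that webhook fails closed, blocking the pod creations it covers, while gcp-auth-mutate.k8s.io fails open and only logs. A hedged way to confirm the two failure policies from the cluster:
	
	kubectl --context addons-348379 get mutatingwebhookconfigurations \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.webhooks[*].failurePolicy}{"\n"}{end}'
	# Per the log above, mutatepod.volcano.sh should report Fail and gcp-auth-mutate.k8s.io Ignore.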
	
	
	==> kube-controller-manager [92413a7a8d6f6d058e961a0759535062b02d7d50e5012e81614e6290ce3465b3] <==
	E0923 12:01:54.936758       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:01:54.938264       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:01:55.039494       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:01:55.040829       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	W0923 12:02:21.838739       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:02:21.838789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 12:02:54.948350       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:02:54.949701       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:02:55.049302       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:02:55.050715       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	W0923 12:03:15.342134       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:03:15.342186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:03:35.184130       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-348379"
	I0923 12:03:40.134011       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="81.435µs"
	I0923 12:03:50.131364       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="41.723µs"
	I0923 12:03:50.145172       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 12:03:54.131217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="62.005µs"
	E0923 12:03:54.959188       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:03:54.960556       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:03:55.056999       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 12:03:55.058260       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 12:04:01.129130       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="79.286µs"
	I0923 12:04:04.129373       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	W0923 12:04:11.923460       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:04:11.923509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [005547c4c4723cd6ce4dac939ebab2b2d89e428b99ec971a179497842dcb5abe] <==
	I0923 11:36:39.711887       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:36:39.808570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:36:39.808647       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:36:39.869255       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:36:39.869320       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:36:39.872284       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:36:39.872851       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:36:39.872865       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:36:39.885041       1 config.go:199] "Starting service config controller"
	I0923 11:36:39.885075       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:36:39.885100       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:36:39.885105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:36:39.886870       1 config.go:328] "Starting node config controller"
	I0923 11:36:39.886882       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:36:39.986150       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:36:39.986235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:36:39.987256       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9255b7a6f4a590493be04a5bbbdf14b0efddfe1f321b8a25d2eed1055c6741df] <==
	W0923 11:36:31.587742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:36:31.587843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:36:31.587943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:36:31.588052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:36:31.588221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.588380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:36:31.588540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:36:31.588655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:36:31.588819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:36:31.588921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:36:31.589185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 11:36:32.574074       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
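	
	Analysis: the kube-scheduler errors are a benign startup race, not a fault: every forbidden list/watch is stamped 11:36:31, and the informer caches sync one second later at 11:36:32, once RBAC propagation completes; nothing recurs afterwards. A sketch (assuming kubectl access to this cluster) to confirm the errors are confined to startup:
	
	kubectl --context addons-348379 -n kube-system logs kube-scheduler-addons-348379 \
	  | grep 'reflector.go:158' | awk '{print $2}' | sort -u
	# Expected: only 11:36:31.* timestamps, i.e. the errors stop once caches are synced.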
	
	
	==> kubelet <==
	Sep 23 12:03:28 addons-348379 kubelet[1462]: E0923 12:03:28.180403    1462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 12:03:28 addons-348379 kubelet[1462]: E0923 12:03:28.180474    1462 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882"
	Sep 23 12:03:28 addons-348379 kubelet[1462]: E0923 12:03:28.180593    1462 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-scheduler,Image:docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882,Command:[],Args:[--logtostderr --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf --enable-healthz=true --enable-metrics=true --leader-elect=false -v=3 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEBUG_SOCKET_DIR,Value:/tmp/klog-socks,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scheduler-config,ReadOnly:false,MountPath:/volcano.scheduler,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:klog-sock,ReadOnly:false,MountPath:/tmp/klog-socks,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhr2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-scheduler-79dc4b78bb-2vx88_volcano-system(74b4a12a-ef6c-40d9-a5f6-e73012730d8a): ErrImagePull: failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized" logger="UnhandledError"
	Sep 23 12:03:28 addons-348379 kubelet[1462]: E0923 12:03:28.182013    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\": failed to resolve reference \\\"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 12:03:36 addons-348379 kubelet[1462]: E0923 12:03:36.170731    1462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 12:03:36 addons-348379 kubelet[1462]: E0923 12:03:36.170800    1462 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e"
	Sep 23 12:03:36 addons-348379 kubelet[1462]: E0923 12:03:36.170905    1462 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:main,Image:docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e,Command:[./gen-admission-secret.sh --service volcano-admission-service --namespace volcano-system --secret volcano-admission-secret],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-admission-init-f2bhm_volcano-system(5ae35267-183a-42ba-96bc-03dac14139ac): ErrImagePull: failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized" logger="UnhandledError"
	Sep 23 12:03:36 addons-348379 kubelet[1462]: E0923 12:03:36.172481    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\": failed to resolve reference \\\"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 12:03:38 addons-348379 kubelet[1462]: E0923 12:03:38.190160    1462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-controller-manager/manifests/sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 12:03:38 addons-348379 kubelet[1462]: E0923 12:03:38.190227    1462 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-controller-manager/manifests/sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: 401 Unauthorized" image="docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de"
	Sep 23 12:03:38 addons-348379 kubelet[1462]: E0923 12:03:38.190324    1462 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:volcano-controllers,Image:docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de,Command:[],Args:[--logtostderr --enable-healthz=true --leader-elect=false -v=4 2>&1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-585sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod volcano-controllers-5ff7c5d4db-w658s_volcano-system(eb364615-4484-4d2d-80e2-1bf54875b4a1): ErrImagePull: failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-controller-manager/manifests/sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: 401 Unauthorized" logger="UnhandledError"
	Sep 23 12:03:38 addons-348379 kubelet[1462]: E0923 12:03:38.191728    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\": failed to resolve reference \\\"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-controller-manager/manifests/sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: 401 Unauthorized\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 12:03:40 addons-348379 kubelet[1462]: E0923 12:03:40.117655    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 12:03:45 addons-348379 kubelet[1462]: E0923 12:03:45.118354    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[admission-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="volcano-system/volcano-admission-7f54bd7598-s85bg" podUID="0b71580f-19db-4c01-80d4-e477def83079"
	Sep 23 12:03:50 addons-348379 kubelet[1462]: E0923 12:03:50.117409    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 12:03:50 addons-348379 kubelet[1462]: E0923 12:03:50.119074    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 12:03:54 addons-348379 kubelet[1462]: E0923 12:03:54.117609    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 12:04:01 addons-348379 kubelet[1462]: E0923 12:04:01.118688    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 12:04:04 addons-348379 kubelet[1462]: E0923 12:04:04.118466    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 12:04:09 addons-348379 kubelet[1462]: E0923 12:04:09.117218    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 12:04:14 addons-348379 kubelet[1462]: E0923 12:04:14.117194    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 12:04:17 addons-348379 kubelet[1462]: E0923 12:04:17.117135    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 12:04:22 addons-348379 kubelet[1462]: E0923 12:04:22.117683    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 12:04:25 addons-348379 kubelet[1462]: E0923 12:04:25.117755    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 12:04:28 addons-348379 kubelet[1462]: E0923 12:04:28.117952    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	
	
	==> storage-provisioner [b05bd4b18e2804c1706af6011e03e349f643f23d7f968ca74ffb0f2eaf78047d] <==
	I0923 11:36:43.550112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:36:43.591903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:36:43.592066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:36:43.604224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:36:43.604420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	I0923 11:36:43.605459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdf468f-763c-4a54-98e3-d90ea0e2e8e5", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-348379_e926b365-1f0f-4822-899e-75d077991921 became leader
	I0923 11:36:43.707026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	

-- /stdout --
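Every pull failure in the kubelet excerpt above traces back to a single defect: the volcano images are referenced with a doubled registry prefix (docker.io/docker.io/volcanosh/...). containerd therefore asks registry-1.docker.io for a repository literally named "docker.io/volcanosh/vc-scheduler", which does not exist, and Docker Hub answers 401 Unauthorized. One way to confirm this from inside the node would be to contrast the two references by hand (a sketch only; it assumes crictl is available in the kicbase image, which this log does not show):

    out/minikube-linux-arm64 -p addons-348379 ssh
    # doubled prefix: resolved as repository "docker.io/volcanosh/..." on Docker Hub, fails with 401
    sudo crictl pull docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0
    # single prefix: the intended repository, expected to pull normally
    sudo crictl pull docker.io/volcanosh/vc-scheduler:v1.10.0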
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
helpers_test.go:261: (dbg) Run:  kubectl --context addons-348379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1 (94.425457ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fwt6v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bfw56" not found
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-s85bg" not found
	Error from server (NotFound): pods "volcano-admission-init-f2bhm" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-w658s" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-2vx88" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1
--- FAIL: TestAddons/parallel/Ingress (483.50s)
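The unpullable volcano images sit upstream of most addon failures in this run: any test that creates pods while the volcano admission webhook is unreachable inherits the problem. A plausible manual remediation, offered only as a sketch (deployment and container names are taken from the pod specs and kubelet logs above; the volcano-admission-init Job would additionally need to be recreated, which set image does not do), would be to repoint the deployments at the single-prefix references:

    kubectl --context addons-348379 -n volcano-system set image deployment/volcano-scheduler volcano-scheduler=docker.io/volcanosh/vc-scheduler:v1.10.0
    kubectl --context addons-348379 -n volcano-system set image deployment/volcano-controllers volcano-controllers=docker.io/volcanosh/vc-controller-manager:v1.10.0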

TestAddons/parallel/CSI (374.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0923 11:49:54.437638 2903158 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 11:49:54.445131 2903158 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 11:49:54.445490 2903158 kapi.go:107] duration metric: took 9.540865ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 9.667642ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-348379 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348379 get pvc hpvc -o jsonpath={.status.phase} -n default
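The twelve identical get-pvc invocations above are the harness polling pvc/hpvc until its phase reaches Bound. Outside the harness, the same wait can be expressed in one command (a sketch mirroring the test's 6m budget; jsonpath waits require kubectl 1.23 or newer):

    kubectl --context addons-348379 -n default wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m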
addons_test.go:518: (dbg) Run:  kubectl --context addons-348379 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:518: (dbg) Non-zero exit: kubectl --context addons-348379 create -f testdata/csi-hostpath-driver/pv-pod.yaml: exit status 1 (160.539929ms)

** stderr ** 
	Error from server (InternalError): error when creating "testdata/csi-hostpath-driver/pv-pod.yaml": Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused

** /stderr **
addons_test.go:520: creating pod with kubectl --context addons-348379 create -f testdata/csi-hostpath-driver/pv-pod.yaml failed: exit status 1
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:523: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:523: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
addons_test.go:523: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-09-23 11:56:06.184833038 +0000 UTC m=+1232.694880385
addons_test.go:524: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
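Note that the pv-pod create was rejected not by the CSI driver but by volcano's mutating pod webhook: its backing Service has no ready endpoints because volcano-admission never started (the same image-pull failure), so the apiserver's webhook call gets connection refused. Two commands would confirm the dangling webhook, and a third could unblock pod creation as a last resort (a sketch; the exact MutatingWebhookConfiguration name is not shown in this log, so take it from the grep output):

    kubectl --context addons-348379 -n volcano-system get endpoints volcano-admission-service
    kubectl --context addons-348379 get mutatingwebhookconfigurations | grep volcano
    # destructive: removes volcano's pod mutation entirely
    kubectl --context addons-348379 delete mutatingwebhookconfiguration <name-from-grep>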
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-348379
helpers_test.go:235: (dbg) docker inspect addons-348379:

-- stdout --
	[
	    {
	        "Id": "1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751",
	        "Created": "2024-09-23T11:36:14.086186806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2904413,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:36:14.226479889Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hostname",
	        "HostsPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hosts",
	        "LogPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751-json.log",
	        "Name": "/addons-348379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-348379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-348379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3-init/diff:/var/lib/docker/overlay2/e2b16ea68ee0680d6b3555ff1ad64b95e5f88f6159373a302ec4d54fa432d99a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-348379",
	                "Source": "/var/lib/docker/volumes/addons-348379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-348379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-348379",
	                "name.minikube.sigs.k8s.io": "addons-348379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0be556c33c7ef40a6e70f0f396b5b7933e0b3ad3ca535a0d80fe041626578e74",
	            "SandboxKey": "/var/run/docker/netns/0be556c33c7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-348379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d5b669f6f456e67aee85da3145f607fecadf2af36a162932dd5e9bc9ffffee31",
	                    "EndpointID": "406dde2b8517c79d36b643a1ab3c5c13c7554eb0ba705342617104596090b341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-348379",
	                        "1973e07b6a14"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
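The inspect output mostly confirms a healthy node container; the useful datum for connectivity questions is the Ports map, which shows the apiserver's 8443/tcp published on 127.0.0.1:41795, the address kubectl traffic for this profile targets. That one mapping can be extracted directly (a sketch; assumes jq is installed on the host):

    docker inspect addons-348379 --format '{{json .NetworkSettings.Ports}}' | jq '."8443/tcp"'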
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-348379 -n addons-348379
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-348379 logs -n 25: (1.406797119s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-611017              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | -o=json --download-only              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-423730              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | download-docker-021793               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-021793            | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | binary-mirror-046209                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34157               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-046209              | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| addons  | enable dashboard -p                  | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| start   | -p addons-348379 --wait=true         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:42 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| ip      | addons-348379 ip                     | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | -p addons-348379                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | -p addons-348379                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:35:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:35:49.628900 2903914 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:35:49.629020 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629030 2903914 out.go:358] Setting ErrFile to fd 2...
	I0923 11:35:49.629036 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629290 2903914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 11:35:49.629765 2903914 out.go:352] Setting JSON to false
	I0923 11:35:49.630698 2903914 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155897,"bootTime":1726935453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:35:49.630769 2903914 start.go:139] virtualization:  
	I0923 11:35:49.632856 2903914 out.go:177] * [addons-348379] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 11:35:49.634664 2903914 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 11:35:49.634735 2903914 notify.go:220] Checking for updates...
	I0923 11:35:49.637703 2903914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:35:49.639443 2903914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:35:49.640961 2903914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 11:35:49.642654 2903914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 11:35:49.644154 2903914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:35:49.646005 2903914 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:35:49.674816 2903914 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:35:49.674959 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.740942 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.731429543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.741058 2903914 docker.go:318] overlay module found
	I0923 11:35:49.742922 2903914 out.go:177] * Using the docker driver based on user configuration
	I0923 11:35:49.744404 2903914 start.go:297] selected driver: docker
	I0923 11:35:49.744427 2903914 start.go:901] validating driver "docker" against <nil>
	I0923 11:35:49.744443 2903914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:35:49.745066 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.807015 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.798069613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.807243 2903914 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:35:49.807490 2903914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:35:49.810585 2903914 out.go:177] * Using Docker driver with root privileges
	I0923 11:35:49.812100 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:35:49.812180 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:35:49.812195 2903914 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 11:35:49.812295 2903914 start.go:340] cluster config:
	{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:35:49.814088 2903914 out.go:177] * Starting "addons-348379" primary control-plane node in "addons-348379" cluster
	I0923 11:35:49.815629 2903914 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 11:35:49.817311 2903914 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:35:49.818975 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:35:49.819031 2903914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:49.819044 2903914 cache.go:56] Caching tarball of preloaded images
	I0923 11:35:49.819072 2903914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:35:49.819129 2903914 preload.go:172] Found /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 11:35:49.819140 2903914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 11:35:49.819629 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:35:49.819663 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json: {Name:mk57bf6c9d1a024b95a9182333fb0e843fbdc049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:35:49.834226 2903914 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:49.834349 2903914 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:35:49.834370 2903914 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:35:49.834376 2903914 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:35:49.834383 2903914 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:35:49.834388 2903914 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 11:36:07.477009 2903914 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 11:36:07.477055 2903914 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:36:07.477102 2903914 start.go:360] acquireMachinesLock for addons-348379: {Name:mk0afc734c4276635047574670b52ff1624a597d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:36:07.477241 2903914 start.go:364] duration metric: took 114.625µs to acquireMachinesLock for "addons-348379"
	I0923 11:36:07.477273 2903914 start.go:93] Provisioning new machine with config: &{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
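
The provisioning config dumped above corresponds, roughly, to a start invocation like the following. This is a sketch reconstructed from the struct fields (Driver:docker, Memory:4000, CPUs:2, ContainerRuntime:containerd, KubernetesVersion:v1.31.1); the harness's exact command line is not shown in this log.

	minikube start -p addons-348379 \
	  --driver=docker \
	  --container-runtime=containerd \
	  --kubernetes-version=v1.31.1 \
	  --memory=4000 --cpus=2
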
	I0923 11:36:07.477361 2903914 start.go:125] createHost starting for "" (driver="docker")
	I0923 11:36:07.479499 2903914 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 11:36:07.479768 2903914 start.go:159] libmachine.API.Create for "addons-348379" (driver="docker")
	I0923 11:36:07.479806 2903914 client.go:168] LocalClient.Create starting
	I0923 11:36:07.479934 2903914 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem
	I0923 11:36:07.656758 2903914 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem
	I0923 11:36:07.895005 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 11:36:07.911037 2903914 cli_runner.go:211] docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 11:36:07.911143 2903914 network_create.go:284] running [docker network inspect addons-348379] to gather additional debugging logs...
	I0923 11:36:07.911165 2903914 cli_runner.go:164] Run: docker network inspect addons-348379
	W0923 11:36:07.926743 2903914 cli_runner.go:211] docker network inspect addons-348379 returned with exit code 1
	I0923 11:36:07.926792 2903914 network_create.go:287] error running [docker network inspect addons-348379]: docker network inspect addons-348379: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-348379 not found
	I0923 11:36:07.926806 2903914 network_create.go:289] output of [docker network inspect addons-348379]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-348379 not found
	
	** /stderr **
	I0923 11:36:07.926904 2903914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:07.941390 2903914 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3eb90}
	I0923 11:36:07.941437 2903914 network_create.go:124] attempt to create docker network addons-348379 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 11:36:07.941499 2903914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-348379 addons-348379
	I0923 11:36:08.013253 2903914 network_create.go:108] docker network addons-348379 192.168.49.0/24 created
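
The network-create step can be reproduced or checked by hand. The create command below is taken verbatim from the log line above; the inspect call is an assumed follow-up to confirm the subnet and gateway stuck:

	docker network create --driver=bridge \
	  --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=addons-348379 \
	  addons-348379
	# verify what was created
	docker network inspect addons-348379 --format '{{json .IPAM.Config}}'
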
	I0923 11:36:08.013292 2903914 kic.go:121] calculated static IP "192.168.49.2" for the "addons-348379" container
	I0923 11:36:08.013374 2903914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 11:36:08.030080 2903914 cli_runner.go:164] Run: docker volume create addons-348379 --label name.minikube.sigs.k8s.io=addons-348379 --label created_by.minikube.sigs.k8s.io=true
	I0923 11:36:08.048023 2903914 oci.go:103] Successfully created a docker volume addons-348379
	I0923 11:36:08.048128 2903914 cli_runner.go:164] Run: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 11:36:10.056258 2903914 cli_runner.go:217] Completed: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.008084592s)
	I0923 11:36:10.056295 2903914 oci.go:107] Successfully prepared a docker volume addons-348379
	I0923 11:36:10.056323 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:10.056345 2903914 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 11:36:10.056440 2903914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 11:36:14.019670 2903914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.963182399s)
	I0923 11:36:14.019706 2903914 kic.go:203] duration metric: took 3.963357873s to extract preloaded images to volume ...
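
The extraction above uses a throwaway container as a loader: a --rm container whose entrypoint is tar, with the preload tarball bind-mounted read-only and the named volume mounted as the extraction target. The same pattern, condensed (here $PRELOAD_TARBALL and <kicbase-image> stand in for the full host path and kicbase digest shown in the log line above):

	# unpack an lz4 tarball into a named docker volume without touching the host
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
	  -v addons-348379:/extractDir \
	  <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir
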
	W0923 11:36:14.019879 2903914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 11:36:14.020008 2903914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 11:36:14.071499 2903914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-348379 --name addons-348379 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-348379 --network addons-348379 --ip 192.168.49.2 --volume addons-348379:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
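
That single docker run line is the whole "node" creation. The same flags, reflowed for readability (<kicbase-image> abbreviates the gcr.io/k8s-minikube/kicbase-builds digest above; the minikube role/mode labels are elided):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname addons-348379 --name addons-348379 \
	  --network addons-348379 --ip 192.168.49.2 \
	  --volume addons-348379:/var \
	  --memory=4000mb --cpus=2 \
	  -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  <kicbase-image>

The static --ip binds the container to the bridge network created earlier, the --volume carries the preloaded images extracted above, and each --publish with an empty host port lets Docker pick a free 127.0.0.1 port, which is why later steps inspect .NetworkSettings.Ports to discover the SSH port (41792 here).
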
	I0923 11:36:14.406562 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Running}}
	I0923 11:36:14.430276 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.456577 2903914 cli_runner.go:164] Run: docker exec addons-348379 stat /var/lib/dpkg/alternatives/iptables
	I0923 11:36:14.524343 2903914 oci.go:144] the created container "addons-348379" has a running status.
	I0923 11:36:14.524373 2903914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa...
	I0923 11:36:14.817157 2903914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 11:36:14.839902 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.872963 2903914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 11:36:14.872990 2903914 kic_runner.go:114] Args: [docker exec --privileged addons-348379 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 11:36:14.951457 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.976890 2903914 machine.go:93] provisionDockerMachine start ...
	I0923 11:36:14.977005 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.007610 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.007901 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.007913 2903914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:36:15.203560 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.203583 2903914 ubuntu.go:169] provisioning hostname "addons-348379"
	I0923 11:36:15.203659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.229660 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.229941 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.229961 2903914 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-348379 && echo "addons-348379" | sudo tee /etc/hostname
	I0923 11:36:15.387302 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.387387 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.409587 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.409829 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.409846 2903914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-348379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-348379/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-348379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:36:15.552128 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:36:15.552218 2903914 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
	I0923 11:36:15.552276 2903914 ubuntu.go:177] setting up certificates
	I0923 11:36:15.552305 2903914 provision.go:84] configureAuth start
	I0923 11:36:15.552432 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:15.571035 2903914 provision.go:143] copyHostCerts
	I0923 11:36:15.571118 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
	I0923 11:36:15.571374 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
	I0923 11:36:15.571463 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
	I0923 11:36:15.571520 2903914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.addons-348379 san=[127.0.0.1 192.168.49.2 addons-348379 localhost minikube]
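
The server certificate is generated with the SAN list shown above. A quick spot-check (assumed, not part of the test run) that the generated cert really carries those names:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list: addons-348379, localhost, minikube, 127.0.0.1, 192.168.49.2
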
	I0923 11:36:15.936111 2903914 provision.go:177] copyRemoteCerts
	I0923 11:36:15.936188 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:36:15.936230 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.954080 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.048521 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:36:16.073032 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:36:16.096585 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 11:36:16.120809 2903914 provision.go:87] duration metric: took 568.476502ms to configureAuth
	I0923 11:36:16.120878 2903914 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:36:16.121066 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:16.121075 2903914 machine.go:96] duration metric: took 1.144158288s to provisionDockerMachine
	I0923 11:36:16.121082 2903914 client.go:171] duration metric: took 8.641266117s to LocalClient.Create
	I0923 11:36:16.121105 2903914 start.go:167] duration metric: took 8.641338888s to libmachine.API.Create "addons-348379"
	I0923 11:36:16.121117 2903914 start.go:293] postStartSetup for "addons-348379" (driver="docker")
	I0923 11:36:16.121127 2903914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:36:16.121180 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:36:16.121219 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.140164 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.237274 2903914 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:36:16.240688 2903914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:36:16.240726 2903914 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:36:16.240751 2903914 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:36:16.240759 2903914 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:36:16.240772 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
	I0923 11:36:16.240845 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
	I0923 11:36:16.240872 2903914 start.go:296] duration metric: took 119.748923ms for postStartSetup
	I0923 11:36:16.241197 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.257321 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:36:16.257608 2903914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:36:16.257659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.273475 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.364420 2903914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:36:16.369417 2903914 start.go:128] duration metric: took 8.892040374s to createHost
	I0923 11:36:16.369444 2903914 start.go:83] releasing machines lock for "addons-348379", held for 8.892189913s
	I0923 11:36:16.369525 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.386496 2903914 ssh_runner.go:195] Run: cat /version.json
	I0923 11:36:16.386558 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.386844 2903914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:36:16.386924 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.402767 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.407504 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.620856 2903914 ssh_runner.go:195] Run: systemctl --version
	I0923 11:36:16.625351 2903914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:36:16.629494 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 11:36:16.656196 2903914 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 11:36:16.656273 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:36:16.685634 2903914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
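
The two find invocations above are minikube's CNI cleanup: the loopback config is patched in place (name and cniVersion fields), while competing bridge/podman configs are renamed with a .mk_disabled suffix rather than deleted, so the step is reversible. The disable half of the pattern, condensed (an illustration, not minikube's actual code):

	# rename, don't delete, conflicting CNI configs
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
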
	I0923 11:36:16.685658 2903914 start.go:495] detecting cgroup driver to use...
	I0923 11:36:16.685694 2903914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:36:16.685752 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 11:36:16.698438 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:36:16.709958 2903914 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:36:16.710048 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:36:16.723912 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:36:16.738695 2903914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:36:16.833978 2903914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:36:16.926860 2903914 docker.go:233] disabling docker service ...
	I0923 11:36:16.926964 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:36:16.947106 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:36:16.959548 2903914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:36:17.053558 2903914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:36:17.135034 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:36:17.146613 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:36:17.163902 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:36:17.174292 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:36:17.185041 2903914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:36:17.185112 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:36:17.195986 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.206180 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:36:17.217391 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.228075 2903914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:36:17.237476 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:36:17.247362 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:36:17.257646 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
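
The sed series above rewrites /etc/containerd/config.toml in place. After the daemon-reload and restart a few lines below, the net effect can be spot-checked (expected values taken directly from the sed replacements):

	grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
	# expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   SystemdCgroup = false        (cgroupfs cgroup driver, not systemd)
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true
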
	I0923 11:36:17.267821 2903914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:36:17.276866 2903914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:36:17.286512 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.359977 2903914 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:36:17.486930 2903914 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 11:36:17.487093 2903914 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 11:36:17.490604 2903914 start.go:563] Will wait 60s for crictl version
	I0923 11:36:17.490709 2903914 ssh_runner.go:195] Run: which crictl
	I0923 11:36:17.494017 2903914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:36:17.529914 2903914 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 11:36:17.530001 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.553062 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.581066 2903914 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 11:36:17.583092 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:17.598970 2903914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 11:36:17.602709 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
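
That one-liner is minikube's idempotent /etc/hosts update: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. Generalized to any name/IP pair (a sketch of the same pattern):

	NAME=host.minikube.internal IP=192.168.49.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
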
	I0923 11:36:17.616517 2903914 kubeadm.go:883] updating cluster {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:36:17.616637 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:17.616705 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.653561 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.653588 2903914 containerd.go:534] Images already preloaded, skipping extraction
	I0923 11:36:17.653654 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.689043 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.689069 2903914 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:36:17.689077 2903914 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0923 11:36:17.689170 2903914 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-348379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
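
Once the drop-in is written (to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, a few lines below), the effective unit can be reviewed with systemd itself (assumed inspection commands, not part of the test run):

	# full unit plus minikube's drop-in with the ExecStart shown above
	systemctl cat kubelet
	# or just the injected flags
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
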
	I0923 11:36:17.689240 2903914 ssh_runner.go:195] Run: sudo crictl info
	I0923 11:36:17.725180 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:17.725207 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:17.725219 2903914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:36:17.725244 2903914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-348379 NodeName:addons-348379 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:36:17.725401 2903914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-348379"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
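
That is the complete generated kubeadm config; it is written to /var/tmp/minikube/kubeadm.yaml below and consumed by kubeadm init. Two assumed ways to sanity-check such a file before an init (kubeadm config validate requires a reasonably recent kubeadm; v1.31.1 is in use here):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the whole init path without persisting anything:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
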
	I0923 11:36:17.725481 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:36:17.734617 2903914 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:36:17.734696 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:36:17.743298 2903914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 11:36:17.761798 2903914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:36:17.779190 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0923 11:36:17.797282 2903914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:36:17.801431 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.813470 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.904297 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:17.918201 2903914 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379 for IP: 192.168.49.2
	I0923 11:36:17.918278 2903914 certs.go:194] generating shared ca certs ...
	I0923 11:36:17.918311 2903914 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:17.918478 2903914 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
	I0923 11:36:18.402482 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt ...
	I0923 11:36:18.402521 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt: {Name:mka24ad8ce2563bd38493ad3048e3b202e9928cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.403346 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key ...
	I0923 11:36:18.403367 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key: {Name:mke301cd867e18ebea9d875f8c02fb489d6a0a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.404035 2903914 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
	I0923 11:36:18.591619 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt ...
	I0923 11:36:18.591652 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt: {Name:mkf29e9cf8d545d0d33d0ce8b9548c24a316f1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.591849 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key ...
	I0923 11:36:18.591862 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key: {Name:mk38a5ba119b442a98d5a1991cd20b7dc11fb378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.592490 2903914 certs.go:256] generating profile certs ...
	I0923 11:36:18.592562 2903914 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key
	I0923 11:36:18.592583 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt with IP's: []
	I0923 11:36:18.906095 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt ...
	I0923 11:36:18.906134 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: {Name:mk33102a3556c59cf025437aacb3628bfa41ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906340 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key ...
	I0923 11:36:18.906354 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key: {Name:mk1894a7120f896161f07a459fec6eb4fe11e236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906997 2903914 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb
	I0923 11:36:18.907023 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 11:36:19.176533 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb ...
	I0923 11:36:19.176571 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb: {Name:mkf3902710f18b86666bddc46eb9d246a2fd9230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177433 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb ...
	I0923 11:36:19.177458 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb: {Name:mkcc729dc3699800a37a33c607924c19bb2a2d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177602 2903914 certs.go:381] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt
	I0923 11:36:19.177687 2903914 certs.go:385] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key
	I0923 11:36:19.177743 2903914 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key
	I0923 11:36:19.177760 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt with IP's: []
	I0923 11:36:19.407564 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt ...
	I0923 11:36:19.407592 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt: {Name:mk60cabd91332996a9c3d4f42fab2e735667c2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408332 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key ...
	I0923 11:36:19.408353 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key: {Name:mk59bc62a8c0559971fc4c2dcb7a472d97d949c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408565 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:36:19.408610 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:36:19.408642 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:36:19.408675 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
	I0923 11:36:19.409266 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:36:19.438812 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:36:19.463399 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:36:19.487765 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:36:19.512554 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:36:19.537552 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:36:19.562531 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:36:19.587389 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:36:19.612515 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:36:19.638755 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:36:19.657011 2903914 ssh_runner.go:195] Run: openssl version
	I0923 11:36:19.662616 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:36:19.672363 2903914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675882 2903914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675950 2903914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.682948 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
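
The hash-and-symlink pair above is the standard OpenSSL CA-directory layout: the link name is the certificate's subject hash plus a .0 suffix, which is how TLS clients locate a CA by scanning /etc/ssl/certs. To confirm the correspondence by hand (assumed check):

	# this hash must match the b5213941.0 symlink created above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0
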
	I0923 11:36:19.692316 2903914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:36:19.695950 2903914 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:36:19.695999 2903914 kubeadm.go:392] StartCluster: {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:36:19.696080 2903914 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 11:36:19.696143 2903914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:36:19.737759 2903914 cri.go:89] found id: ""
	I0923 11:36:19.737855 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:36:19.746943 2903914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:36:19.756075 2903914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 11:36:19.756179 2903914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:36:19.767199 2903914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:36:19.767221 2903914 kubeadm.go:157] found existing configuration files:
	
	I0923 11:36:19.767362 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:36:19.776658 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:36:19.776748 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:36:19.785348 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:36:19.794057 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:36:19.794157 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:36:19.803152 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.812476 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:36:19.812574 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.821847 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:36:19.830557 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:36:19.830648 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
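
The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm regenerates it. Condensed (a sketch, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' \
	    "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done
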
	I0923 11:36:19.839343 2903914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 11:36:19.882483 2903914 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:36:19.882775 2903914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:36:19.901384 2903914 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 11:36:19.901507 2903914 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 11:36:19.901565 2903914 kubeadm.go:310] OS: Linux
	I0923 11:36:19.901637 2903914 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 11:36:19.901714 2903914 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 11:36:19.901776 2903914 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 11:36:19.901867 2903914 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 11:36:19.901970 2903914 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 11:36:19.902052 2903914 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 11:36:19.902117 2903914 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 11:36:19.902212 2903914 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 11:36:19.902294 2903914 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 11:36:19.967219 2903914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:36:19.967380 2903914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:36:19.967473 2903914 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:36:19.973209 2903914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:36:19.975734 2903914 out.go:235]   - Generating certificates and keys ...
	I0923 11:36:19.975831 2903914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:36:19.975904 2903914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:36:20.179743 2903914 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:36:21.047813 2903914 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:36:21.298950 2903914 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:36:22.135071 2903914 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:36:22.608576 2903914 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:36:22.609029 2903914 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:22.891026 2903914 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:36:22.891409 2903914 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:23.205606 2903914 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:36:23.489426 2903914 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:36:23.714238 2903914 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:36:23.714637 2903914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:36:23.917484 2903914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:36:24.438330 2903914 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:36:24.759712 2903914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:36:25.033943 2903914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:36:25.695483 2903914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:36:25.696139 2903914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:36:25.699152 2903914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:36:25.701227 2903914 out.go:235]   - Booting up control plane ...
	I0923 11:36:25.701329 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:36:25.702980 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:36:25.704104 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:36:25.714874 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:36:25.721041 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:36:25.721278 2903914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:36:25.822291 2903914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:36:25.822414 2903914 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:36:26.325513 2903914 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.540451ms
	I0923 11:36:26.325611 2903914 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:36:32.327784 2903914 kubeadm.go:310] [api-check] The API server is healthy after 6.002254671s
	I0923 11:36:32.349741 2903914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:36:32.363473 2903914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:36:32.390246 2903914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:36:32.390456 2903914 kubeadm.go:310] [mark-control-plane] Marking the node addons-348379 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:36:32.404726 2903914 kubeadm.go:310] [bootstrap-token] Using token: 9jvvlf.nkkd2cu2r67rq0id
	I0923 11:36:32.407710 2903914 out.go:235]   - Configuring RBAC rules ...
	I0923 11:36:32.407927 2903914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:36:32.415924 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:36:32.424675 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:36:32.430654 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:36:32.435150 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:36:32.439236 2903914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:36:32.750475 2903914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:36:33.169716 2903914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:36:33.734975 2903914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:36:33.736330 2903914 kubeadm.go:310] 
	I0923 11:36:33.736408 2903914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:36:33.736419 2903914 kubeadm.go:310] 
	I0923 11:36:33.736495 2903914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:36:33.736505 2903914 kubeadm.go:310] 
	I0923 11:36:33.736531 2903914 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:36:33.736593 2903914 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:36:33.736647 2903914 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:36:33.736656 2903914 kubeadm.go:310] 
	I0923 11:36:33.736710 2903914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:36:33.736719 2903914 kubeadm.go:310] 
	I0923 11:36:33.736766 2903914 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:36:33.736774 2903914 kubeadm.go:310] 
	I0923 11:36:33.736827 2903914 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:36:33.736907 2903914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:36:33.736980 2903914 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:36:33.736989 2903914 kubeadm.go:310] 
	I0923 11:36:33.737074 2903914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:36:33.737166 2903914 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:36:33.737175 2903914 kubeadm.go:310] 
	I0923 11:36:33.737258 2903914 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737363 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 \
	I0923 11:36:33.737389 2903914 kubeadm.go:310] 	--control-plane 
	I0923 11:36:33.737397 2903914 kubeadm.go:310] 
	I0923 11:36:33.737482 2903914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:36:33.737497 2903914 kubeadm.go:310] 
	I0923 11:36:33.737577 2903914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737677 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 
	I0923 11:36:33.741014 2903914 kubeadm.go:310] W0923 11:36:19.879102    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741352 2903914 kubeadm.go:310] W0923 11:36:19.880019    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741583 2903914 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 11:36:33.741697 2903914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
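The bootstrap token in the join commands above is short-lived (kubeadm tokens default to a 24h TTL), so it will not authenticate against this cluster indefinitely. A sketch of minting a fresh token together with a ready-made join command, run on the control-plane node:

	# Prints a complete "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line.
	kubeadm token create --print-join-command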
	I0923 11:36:33.741739 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:33.741754 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:33.744827 2903914 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 11:36:33.747515 2903914 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 11:36:33.751591 2903914 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 11:36:33.751612 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 11:36:33.770489 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
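With the kindnet manifest applied, the CNI pods run as a DaemonSet in kube-system. A sketch of checking the rollout; the DaemonSet name and the app=kindnet label are assumptions matching minikube's bundled manifest:

	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide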
	I0923 11:36:34.059421 2903914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:36:34.059507 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.059553 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-348379 minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=addons-348379 minikube.k8s.io/primary=true
	I0923 11:36:34.223427 2903914 ops.go:34] apiserver oom_adj: -16
	I0923 11:36:34.223613 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.724103 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.223668 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.724171 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.224392 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.724281 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.223628 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.724422 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.224601 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.335390 2903914 kubeadm.go:1113] duration metric: took 4.275959417s to wait for elevateKubeSystemPrivileges
	I0923 11:36:38.335426 2903914 kubeadm.go:394] duration metric: took 18.639429725s to StartCluster
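The repeated "kubectl get sa default" runs above are a readiness poll: the "default" ServiceAccount only exists once the controller-manager's service-account controller has come up, which is what the elevateKubeSystemPrivileges step waits on before creating its RBAC binding. A minimal standalone sketch of the same wait, assuming kubectl is pointed at the new cluster:

	# Block until the "default" ServiceAccount exists in the default namespace.
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done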
	I0923 11:36:38.335446 2903914 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336106 2903914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:36:38.336533 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336743 2903914 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:38.336888 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:36:38.337142 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.337173 2903914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:36:38.337257 2903914 addons.go:69] Setting yakd=true in profile "addons-348379"
	I0923 11:36:38.337273 2903914 addons.go:234] Setting addon yakd=true in "addons-348379"
	I0923 11:36:38.337299 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.337814 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338086 2903914 addons.go:69] Setting inspektor-gadget=true in profile "addons-348379"
	I0923 11:36:38.338109 2903914 addons.go:234] Setting addon inspektor-gadget=true in "addons-348379"
	I0923 11:36:38.338133 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338208 2903914 addons.go:69] Setting metrics-server=true in profile "addons-348379"
	I0923 11:36:38.338232 2903914 addons.go:234] Setting addon metrics-server=true in "addons-348379"
	I0923 11:36:38.338262 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338580 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338757 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.339010 2903914 addons.go:69] Setting cloud-spanner=true in profile "addons-348379"
	I0923 11:36:38.339029 2903914 addons.go:234] Setting addon cloud-spanner=true in "addons-348379"
	I0923 11:36:38.339055 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.339531 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.343469 2903914 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-348379"
	I0923 11:36:38.343503 2903914 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-348379"
	I0923 11:36:38.345984 2903914 out.go:177] * Verifying Kubernetes components...
	I0923 11:36:38.346027 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.346500 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.348537 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:38.345645 2903914 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-348379"
	I0923 11:36:38.365526 2903914 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:38.365567 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.366036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345659 2903914 addons.go:69] Setting default-storageclass=true in profile "addons-348379"
	I0923 11:36:38.373074 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-348379"
	I0923 11:36:38.373503 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345667 2903914 addons.go:69] Setting gcp-auth=true in profile "addons-348379"
	I0923 11:36:38.384131 2903914 mustload.go:65] Loading cluster: addons-348379
	I0923 11:36:38.384331 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.384582 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345675 2903914 addons.go:69] Setting ingress=true in profile "addons-348379"
	I0923 11:36:38.394024 2903914 addons.go:234] Setting addon ingress=true in "addons-348379"
	I0923 11:36:38.394117 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.345679 2903914 addons.go:69] Setting ingress-dns=true in profile "addons-348379"
	I0923 11:36:38.401504 2903914 addons.go:234] Setting addon ingress-dns=true in "addons-348379"
	I0923 11:36:38.404456 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.405051 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345945 2903914 addons.go:69] Setting registry=true in profile "addons-348379"
	I0923 11:36:38.410134 2903914 addons.go:234] Setting addon registry=true in "addons-348379"
	I0923 11:36:38.410210 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.411036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345953 2903914 addons.go:69] Setting storage-provisioner=true in profile "addons-348379"
	I0923 11:36:38.345956 2903914 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-348379"
	I0923 11:36:38.345960 2903914 addons.go:69] Setting volcano=true in profile "addons-348379"
	I0923 11:36:38.345964 2903914 addons.go:69] Setting volumesnapshots=true in profile "addons-348379"
	I0923 11:36:38.411256 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.454084 2903914 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:36:38.460897 2903914 addons.go:234] Setting addon storage-provisioner=true in "addons-348379"
	I0923 11:36:38.460953 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.461441 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.478144 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:36:38.478222 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:36:38.478322 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.487269 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-348379"
	I0923 11:36:38.487639 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.517904 2903914 addons.go:234] Setting addon volcano=true in "addons-348379"
	I0923 11:36:38.517966 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.518467 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.546347 2903914 addons.go:234] Setting addon volumesnapshots=true in "addons-348379"
	I0923 11:36:38.546411 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.546987 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.575673 2903914 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:36:38.575914 2903914 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:36:38.579125 2903914 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:38.579150 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:36:38.579221 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.579592 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:36:38.579639 2903914 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:36:38.579689 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.617860 2903914 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:36:38.622150 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:36:38.622177 2903914 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:36:38.622251 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.622735 2903914 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:36:38.624431 2903914 addons.go:234] Setting addon default-storageclass=true in "addons-348379"
	I0923 11:36:38.624466 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.628278 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.632340 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.636671 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:36:38.637544 2903914 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:38.637561 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:36:38.637622 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.655509 2903914 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:36:38.662366 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:36:38.665082 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:36:38.665107 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:36:38.665177 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.670015 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:36:38.676295 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:38.676385 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:36:38.676464 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.683562 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:36:38.686396 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:38.686419 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:36:38.686489 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.712304 2903914 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-348379"
	I0923 11:36:38.712346 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.712766 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.727410 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:36:38.730115 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:36:38.735471 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 11:36:38.735719 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.757102 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.760000 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:36:38.766393 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:36:38.766549 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:36:38.769513 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:36:38.771099 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.774083 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:36:38.774459 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:38.774507 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:36:38.774615 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.780134 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 11:36:38.788013 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:36:38.790672 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:36:38.794853 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:36:38.794879 2903914 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:36:38.794943 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.799096 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:36:38.799171 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:36:38.799267 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.815113 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 11:36:38.821525 2903914 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:38.821606 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 11:36:38.822703 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.827569 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850382 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850811 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.867021 2903914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:38.867042 2903914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:36:38.867113 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.883391 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.898738 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.922806 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.939666 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.947337 2903914 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:36:38.950063 2903914 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:36:38.952600 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:38.952625 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:36:38.952695 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.965723 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.981816 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.987132 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.991897 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	W0923 11:36:39.007637 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007696 2903914 retry.go:31] will retry after 239.86918ms: ssh: handshake failed: EOF
	W0923 11:36:39.007745 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007761 2903914 retry.go:31] will retry after 150.66552ms: ssh: handshake failed: EOF
	I0923 11:36:39.023482 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.023945 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.169972 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:39.170259 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
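The sed pipeline above rewrites the stock CoreDNS ConfigMap in place, inserting a hosts block that maps host.minikube.internal to the gateway IP (plus a log directive) ahead of the forward plugin, then replaces the ConfigMap. A sketch of verifying the injected record from inside the cluster; the throwaway pod image is an assumption:

	kubectl run dns-check --rm -it --restart=Never --image=busybox:stable \
	  -- nslookup host.minikube.internal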
	I0923 11:36:39.494971 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:39.556492 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:39.608428 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:39.623138 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:36:39.623206 2903914 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:36:39.709728 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:36:39.709758 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:36:39.777118 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:39.827433 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:36:39.827460 2903914 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:36:39.863119 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:39.870165 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:36:39.870234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:36:39.883044 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:39.890345 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:39.896655 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:36:39.896731 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:36:39.967387 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:36:39.967475 2903914 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:36:40.018137 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:36:40.018231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:36:40.030206 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:36:40.030287 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:36:40.050639 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:40.058613 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.058708 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:36:40.150893 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:36:40.151005 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:36:40.242616 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:36:40.242710 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:36:40.287048 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:36:40.287084 2903914 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:36:40.486475 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:36:40.486504 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:36:40.493831 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.494920 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:36:40.494943 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:36:40.557986 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.558018 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:36:40.619083 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:40.619108 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:36:40.657629 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:36:40.657660 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:36:40.734595 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:36:40.734624 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:36:40.801552 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:36:40.801595 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:36:40.865890 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.930189 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:36:40.930231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:36:40.951956 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:36:40.951998 2903914 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:36:40.961846 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:36:40.961876 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:36:40.972100 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:41.216224 2903914 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.216250 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:36:41.260577 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:36:41.260621 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:36:41.278445 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.294750 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:36:41.294791 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:36:41.323456 2903914 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.153148456s)
	I0923 11:36:41.323525 2903914 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 11:36:41.323500 2903914 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.153503843s)
	I0923 11:36:41.324480 2903914 node_ready.go:35] waiting up to 6m0s for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330122 2903914 node_ready.go:49] node "addons-348379" has status "Ready":"True"
	I0923 11:36:41.330152 2903914 node_ready.go:38] duration metric: took 5.643428ms for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330163 2903914 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:36:41.339728 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:41.543987 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:36:41.544058 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:36:41.648713 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.648741 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:36:41.746379 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:36:41.746407 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:36:41.828842 2903914 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-348379" context rescaled to 1 replicas
	I0923 11:36:41.902914 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.921919 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.42690557s)
	I0923 11:36:41.921979 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.365461937s)
	I0923 11:36:41.922006 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.313555468s)
	I0923 11:36:42.029240 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:36:42.029272 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:36:42.347806 2903914 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347883 2903914 pod_ready.go:82] duration metric: took 1.008114432s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	E0923 11:36:42.347920 2903914 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347968 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:42.549396 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:36:42.549431 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:36:42.830524 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:42.830560 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:36:43.211480 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:44.381316 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:44.884644 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.107485626s)
	I0923 11:36:45.843349 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:36:45.843447 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:45.872465 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.470866 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:36:46.702197 2903914 addons.go:234] Setting addon gcp-auth=true in "addons-348379"
	I0923 11:36:46.702270 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:46.702828 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:46.733217 2903914 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:36:46.733280 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:46.775337 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.854788 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:47.501824 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.638626321s)
	I0923 11:36:47.501900 2903914 addons.go:475] Verifying addon ingress=true in "addons-348379"
	I0923 11:36:47.504030 2903914 out.go:177] * Verifying ingress addon...
	I0923 11:36:47.506684 2903914 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:36:47.511450 2903914 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:36:47.511576 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.013380 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.544278 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.902915 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:49.026130 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.515743 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.786183 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.903059538s)
	I0923 11:36:49.786253 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.89583004s)
	I0923 11:36:49.786324 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.735598298s)
	I0923 11:36:49.786361 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.292505813s)
	I0923 11:36:49.786376 2903914 addons.go:475] Verifying addon registry=true in "addons-348379"
	I0923 11:36:49.786562 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.920643746s)
	I0923 11:36:49.786579 2903914 addons.go:475] Verifying addon metrics-server=true in "addons-348379"
	I0923 11:36:49.786620 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.814490533s)
	I0923 11:36:49.786930 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.508442064s)
	W0923 11:36:49.786964 2903914 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:36:49.786984 2903914 retry.go:31] will retry after 231.122068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
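This is the usual CRD/CR ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass were submitted in a single apply, and the API server had not yet established the new kind when the class arrived, hence the retry below. A sketch of sidestepping the race by applying the CRDs first and waiting for them to be Established:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml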
	I0923 11:36:49.787069 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.884124931s)
	I0923 11:36:49.788510 2903914 out.go:177] * Verifying registry addon...
	I0923 11:36:49.789825 2903914 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-348379 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:36:49.793761 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:36:49.857501 2903914 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:36:49.857529 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.019003 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:50.091953 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.88041396s)
	I0923 11:36:50.091996 2903914 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:50.092188 2903914 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.35893014s)
	I0923 11:36:50.094620 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:50.094707 2903914 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:36:50.096428 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.099435 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:36:50.101290 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:36:50.102965 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:36:50.103035 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:36:50.192143 2903914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:36:50.192217 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.253203 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:36:50.253267 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:36:50.330971 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.356163 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.356234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:36:50.430758 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.512255 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.604722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.797707 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.023077 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.104450 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.297470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.356586 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:51.511733 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.617197 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.799992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.872269 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853212545s)
	I0923 11:36:51.872449 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441621639s)
	I0923 11:36:51.875653 2903914 addons.go:475] Verifying addon gcp-auth=true in "addons-348379"
	I0923 11:36:51.880701 2903914 out.go:177] * Verifying gcp-auth addon...
	I0923 11:36:51.886238 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:36:51.899994 2903914 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:36:52.012221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.113517 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.299559 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.512212 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.605423 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.799422 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.854916 2903914 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.854943 2903914 pod_ready.go:82] duration metric: took 10.506940522s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.854956 2903914 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861027 2903914 pod_ready.go:93] pod "etcd-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.861059 2903914 pod_ready.go:82] duration metric: took 6.063045ms for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861112 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867900 2903914 pod_ready.go:93] pod "kube-apiserver-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.867934 2903914 pod_ready.go:82] duration metric: took 6.806328ms for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867947 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875428 2903914 pod_ready.go:93] pod "kube-controller-manager-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.875465 2903914 pod_ready.go:82] duration metric: took 7.477644ms for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875477 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881487 2903914 pod_ready.go:93] pod "kube-proxy-nqbmm" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.881527 2903914 pod_ready.go:82] duration metric: took 6.024203ms for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881558 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.013082 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.115359 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.261367 2903914 pod_ready.go:93] pod "kube-scheduler-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:53.261440 2903914 pod_ready.go:82] duration metric: took 379.865643ms for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.261468 2903914 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.298690 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:53.511926 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.605400 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.806273 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.013143 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.105281 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.297943 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.511688 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.604819 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.801146 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.012597 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.104977 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.267783 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:55.297710 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.513110 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.604704 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.797620 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.013461 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.105679 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.298722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.511968 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.607325 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.804470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.011592 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.104763 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.298808 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.511851 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.612444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.768189 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:57.797413 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.012279 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.104746 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.298054 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.511885 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.604675 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.801111 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.014365 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.115071 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.299589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.511922 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.604297 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.768227 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:59.798277 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.038178 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.105635 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.333590 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.512357 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.604689 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.799851 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.011394 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.105032 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.310842 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.511596 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.605046 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.768750 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:01.798428 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.013963 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.113737 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.297934 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.511024 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.605393 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.798053 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.012529 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.104923 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.298010 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.512557 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.604282 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.768828 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:03.798455 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.015940 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.104743 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.299059 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.512309 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.606024 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.798339 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.019138 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.104829 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.298375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.511973 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.604665 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.767100 2903914 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:05.767130 2903914 pod_ready.go:82] duration metric: took 12.505640545s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:05.767142 2903914 pod_ready.go:39] duration metric: took 24.436967089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:05.767156 2903914 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:37:05.767223 2903914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:05.781985 2903914 api_server.go:72] duration metric: took 27.445202984s to wait for apiserver process to appear ...
	I0923 11:37:05.782061 2903914 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:37:05.782092 2903914 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 11:37:05.789746 2903914 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 11:37:05.790754 2903914 api_server.go:141] control plane version: v1.31.1
	I0923 11:37:05.790781 2903914 api_server.go:131] duration metric: took 8.705461ms to wait for apiserver health ...
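
The healthz probe above hits https://192.168.49.2:8443/healthz directly. The same check can be reproduced through kubectl, which reuses the kubeconfig credentials; the ?verbose query string itemizes each sub-check (a sketch, assuming the usual minikube convention that the kubectl context is named after the profile):

	kubectl --context addons-348379 get --raw='/healthz?verbose'
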
	I0923 11:37:05.790793 2903914 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:37:05.800773 2903914 system_pods.go:59] 18 kube-system pods found
	I0923 11:37:05.800813 2903914 system_pods.go:61] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.800824 2903914 system_pods.go:61] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.800829 2903914 system_pods.go:61] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.800839 2903914 system_pods.go:61] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.800844 2903914 system_pods.go:61] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.800848 2903914 system_pods.go:61] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.800852 2903914 system_pods.go:61] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.800856 2903914 system_pods.go:61] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.800860 2903914 system_pods.go:61] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.800866 2903914 system_pods.go:61] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.800870 2903914 system_pods.go:61] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.800875 2903914 system_pods.go:61] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.800884 2903914 system_pods.go:61] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.800892 2903914 system_pods.go:61] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.800908 2903914 system_pods.go:61] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.800916 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800924 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800931 2903914 system_pods.go:61] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.800938 2903914 system_pods.go:74] duration metric: took 10.139277ms to wait for pod list to return data ...
	I0923 11:37:05.800948 2903914 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:37:05.802797 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.803949 2903914 default_sa.go:45] found service account: "default"
	I0923 11:37:05.803977 2903914 default_sa.go:55] duration metric: took 3.018472ms for default service account to be created ...
	I0923 11:37:05.803986 2903914 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:37:05.813725 2903914 system_pods.go:86] 18 kube-system pods found
	I0923 11:37:05.813761 2903914 system_pods.go:89] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.813771 2903914 system_pods.go:89] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.813777 2903914 system_pods.go:89] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.813785 2903914 system_pods.go:89] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.813789 2903914 system_pods.go:89] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.813793 2903914 system_pods.go:89] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.813798 2903914 system_pods.go:89] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.813810 2903914 system_pods.go:89] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.813815 2903914 system_pods.go:89] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.813824 2903914 system_pods.go:89] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.813828 2903914 system_pods.go:89] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.813835 2903914 system_pods.go:89] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.813845 2903914 system_pods.go:89] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.813851 2903914 system_pods.go:89] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.813859 2903914 system_pods.go:89] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.813866 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813877 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813881 2903914 system_pods.go:89] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.813889 2903914 system_pods.go:126] duration metric: took 9.896579ms to wait for k8s-apps to be running ...
	I0923 11:37:05.813904 2903914 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:37:05.813964 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:37:05.826195 2903914 system_svc.go:56] duration metric: took 12.281298ms WaitForService to wait for kubelet
	I0923 11:37:05.826224 2903914 kubeadm.go:582] duration metric: took 27.489446441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:37:05.826246 2903914 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:37:05.829405 2903914 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 11:37:05.829438 2903914 node_conditions.go:123] node cpu capacity is 2
	I0923 11:37:05.829451 2903914 node_conditions.go:105] duration metric: took 3.199247ms to run NodePressure ...
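
The NodePressure step is just reading back the capacity the kubelet reported for the node (2 CPUs and 203034800Ki of ephemeral storage here). The same raw numbers can be pulled from the node object with a jsonpath query:

	kubectl get node addons-348379 -o jsonpath='{.status.capacity}'
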
	I0923 11:37:05.829481 2903914 start.go:241] waiting for startup goroutines ...
	I0923 11:37:06.016364 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.105089 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.297935 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:06.512579 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.605014 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.797841 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.011801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.104294 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.298088 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.511664 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.604893 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.801940 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.013764 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.105345 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.299766 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.512208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.605989 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.797920 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.013457 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.113285 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.298479 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.519639 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.621187 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.798092 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.023070 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.120870 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.299336 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.511998 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.604502 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.798656 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.011605 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.104357 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.297872 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.510891 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.605182 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.798112 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.016742 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.106392 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.302890 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.514134 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.606249 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.802344 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.012829 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.106630 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.297952 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.520400 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.609375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.799034 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.014344 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.118058 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.298788 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.510806 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.631742 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.797720 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.019826 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.106226 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.298830 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.511804 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.605411 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.798246 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.012028 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.104868 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.297740 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.524665 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.625850 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.797603 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.011338 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.103818 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.298307 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.512577 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.605193 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.798142 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.013543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.105035 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.297589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.512084 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.605341 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.798244 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.012543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.113664 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.298444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.511599 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.606804 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.798859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.017663 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.106755 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.297564 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.512160 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.604958 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.797812 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.013732 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.105616 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.298921 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.510795 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.604820 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.797800 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.012174 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.114441 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.298237 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.513158 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.625275 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.799161 2903914 kapi.go:107] duration metric: took 33.005400732s to wait for kubernetes.io/minikube-addons=registry ...
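
Registry readiness was decided purely by the label selector that the polling lines repeat. The equivalent one-shot check, using the same selector and namespace recorded in the log:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
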
	I0923 11:37:23.013096 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.105126 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:23.511910 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.613223 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.014075 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.105028 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.511000 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.605248 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.014495 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.111460 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.512685 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.612274 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.013492 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.105721 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.511127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.605859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.012240 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.105210 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.511957 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.604964 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.012182 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.104813 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.511895 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.605303 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.013053 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.104426 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.511346 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.603992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.014157 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.118062 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.511690 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.604969 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.014750 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.105518 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.511111 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.605002 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.012362 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.104960 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.511542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.604045 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.013753 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.104350 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.512293 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.614483 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:34.016127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:34.104734 2903914 kapi.go:107] duration metric: took 44.005304527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 11:37:34.510755 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.019903 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.510951 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.014603 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.510966 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.016710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.511532 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.013339 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.511853 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.012335 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.511569 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 33 near-identical "waiting for pod app.kubernetes.io/name=ingress-nginx" poll lines (11:37:40 through 11:37:56, one every ~500ms, all Pending) elided ...]
	I0923 11:37:56.512221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:57.011684 2903914 kapi.go:107] duration metric: took 1m9.505004156s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 11:42:51.886491 2903914 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 11:42:51.886551 2903914 kapi.go:107] duration metric: took 6m0.000318633s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 11:42:51.886644 2903914 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 11:42:51.888644 2903914 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress
	I0923 11:42:51.890515 2903914 addons.go:510] duration metric: took 6m13.553319939s for enable addons: enabled=[ingress-dns nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress]
	I0923 11:42:51.890575 2903914 start.go:246] waiting for cluster config update ...
	I0923 11:42:51.890600 2903914 start.go:255] writing updated cluster config ...
	I0923 11:42:51.890918 2903914 ssh_runner.go:195] Run: rm -f paused
	I0923 11:42:52.246444 2903914 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:42:52.248672 2903914 out.go:177] * Done! kubectl is now configured to use "addons-348379" cluster and "default" namespace by default
	
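	Annotation: the gcp-auth wait above ran its full 6m0s and ended in a context deadline instead of a ready pod. A minimal triage sketch for that state, assuming the cluster is still running (the label selector is the one the waiter polled; any pod names in the output are illustrative):

	# List whatever the gcp-auth waiter was polling for, plus recent events.
	kubectl --context addons-348379 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth -o wide
	kubectl --context addons-348379 -n gcp-auth get events --sort-by=.lastTimestamp | tail -n 20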
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	5abcda5fe6ab9       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   8                   1161771f95540       gadget-xl5qc
	052af9fc2b4c1       289a818c8d9c5       18 minutes ago      Running             controller                               0                   47eb6c64c7a7d       ingress-nginx-controller-bc57996ff-6wq45
	967992dcbaf35       ee6d597e62dc8       18 minutes ago      Running             csi-snapshotter                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c27f769c6bce0       642ded511e141       18 minutes ago      Running             csi-provisioner                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	0faa259c6b420       922312104da8a       18 minutes ago      Running             liveness-probe                           0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	994b2f0a53275       08f6b2990811a       18 minutes ago      Running             hostpath                                 0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	ccaa6826447e2       9a80d518f102c       18 minutes ago      Running             csi-attacher                             0                   3c219582bf398       csi-hostpath-attacher-0
	63bc3e7f0cc00       420193b27261a       18 minutes ago      Exited              patch                                    0                   6fd7a092c3487       ingress-nginx-admission-patch-bfw56
	c5db857da31e6       4d1e5c3e97420       18 minutes ago      Running             volume-snapshot-controller               0                   311300b7a364c       snapshot-controller-56fcc65765-d47ng
	d1ac50a6261dd       420193b27261a       18 minutes ago      Exited              create                                   0                   895d61d9c16ad       ingress-nginx-admission-create-fwt6v
	ac4d50ae15f93       4d1e5c3e97420       18 minutes ago      Running             volume-snapshot-controller               0                   698a149c7f280       snapshot-controller-56fcc65765-dchr7
	ed38cab12122b       0107d56dbc0be       18 minutes ago      Running             node-driver-registrar                    0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	6781c22ae4a4e       5548a49bb60ba       18 minutes ago      Running             metrics-server                           0                   ffa74d42d59a0       metrics-server-84c5f94fbc-dgpbq
	5e5a8e69b34fd       7ce2150c8929b       18 minutes ago      Running             local-path-provisioner                   0                   9bc0d2214e2a7       local-path-provisioner-86d989889c-h5pl9
	cecddb85ce0f0       487fa743e1e22       19 minutes ago      Running             csi-resizer                              0                   85d2097f2da83       csi-hostpath-resizer-0
	a6cc88c765de7       1461903ec4fe9       19 minutes ago      Running             csi-external-health-monitor-controller   0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c37000ef28652       35508c2f890c4       19 minutes ago      Running             minikube-ingress-dns                     0                   d881770814e2f       kube-ingress-dns-minikube
	497f8c41b274e       2f6c962e7b831       19 minutes ago      Running             coredns                                  0                   d2d48d489a636       coredns-7c65d6cfc9-ppz9h
	b05bd4b18e280       ba04bb24b9575       19 minutes ago      Running             storage-provisioner                      0                   e816c2d6b5461       storage-provisioner
	a7e55167b7b39       6a23fa8fd2b78       19 minutes ago      Running             kindnet-cni                              0                   9389ee1ac67f5       kindnet-4kcdh
	005547c4c4723       24a140c548c07       19 minutes ago      Running             kube-proxy                               0                   a4defdda67173       kube-proxy-nqbmm
	9255b7a6f4a59       7f8aa378bb47d       19 minutes ago      Running             kube-scheduler                           0                   cae78edec7e41       kube-scheduler-addons-348379
	92413a7a8d6f6       279f381cb3736       19 minutes ago      Running             kube-controller-manager                  0                   731c7b79dc027       kube-controller-manager-addons-348379
	20e5f68e09619       d3f53a98c0a9d       19 minutes ago      Running             kube-apiserver                           0                   bdcb66b095a48       kube-apiserver-addons-348379
	8abc59946512a       27e3830e14027       19 minutes ago      Running             etcd                                     0                   a8c7d184a28f5       etcd-addons-348379
	
	
	==> containerd <==
	Sep 23 11:53:14 addons-348379 containerd[816]: time="2024-09-23T11:53:14.130876238Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:53:14 addons-348379 containerd[816]: time="2024-09-23T11:53:14.189812360Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-scheduler/manifests/sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: 401 Unauthorized"
	Sep 23 11:53:14 addons-348379 containerd[816]: time="2024-09-23T11:53:14.189929365Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-scheduler@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882: active requests=0, bytes read=0"
	Sep 23 11:53:18 addons-348379 containerd[816]: time="2024-09-23T11:53:18.117304728Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\""
	Sep 23 11:53:18 addons-348379 containerd[816]: time="2024-09-23T11:53:18.119466491Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:53:18 addons-348379 containerd[816]: time="2024-09-23T11:53:18.182833569Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-controller-manager/manifests/sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: 401 Unauthorized"
	Sep 23 11:53:18 addons-348379 containerd[816]: time="2024-09-23T11:53:18.182948612Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-controller-manager@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de: active requests=0, bytes read=0"
	Sep 23 11:53:27 addons-348379 containerd[816]: time="2024-09-23T11:53:27.117963502Z" level=info msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\""
	Sep 23 11:53:27 addons-348379 containerd[816]: time="2024-09-23T11:53:27.120076207Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 23 11:53:27 addons-348379 containerd[816]: time="2024-09-23T11:53:27.179328553Z" level=error msg="PullImage \"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\" failed" error="failed to pull and unpack image \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": failed to resolve reference \"docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\": unexpected status from HEAD request to https://registry-1.docker.io/v2/docker.io/volcanosh/vc-webhook-manager/manifests/sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: 401 Unauthorized"
	Sep 23 11:53:27 addons-348379 containerd[816]: time="2024-09-23T11:53:27.179404779Z" level=info msg="stop pulling image docker.io/docker.io/volcanosh/vc-webhook-manager@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e: active requests=0, bytes read=0"
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.117832031Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.234471928Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.236283095Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.240033036Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 122.15024ms"
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.240078189Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.242040346Z" level=info msg="CreateContainer within sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" for container &ContainerMetadata{Name:gadget,Attempt:8,}"
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.263551088Z" level=info msg="CreateContainer within sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" for &ContainerMetadata{Name:gadget,Attempt:8,} returns container id \"5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e\""
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.264166486Z" level=info msg="StartContainer for \"5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e\""
	Sep 23 11:53:28 addons-348379 containerd[816]: time="2024-09-23T11:53:28.316519203Z" level=info msg="StartContainer for \"5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e\" returns successfully"
	Sep 23 11:53:29 addons-348379 containerd[816]: time="2024-09-23T11:53:29.835435509Z" level=info msg="shim disconnected" id=5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e namespace=k8s.io
	Sep 23 11:53:29 addons-348379 containerd[816]: time="2024-09-23T11:53:29.835966896Z" level=warning msg="cleaning up after shim disconnected" id=5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e namespace=k8s.io
	Sep 23 11:53:29 addons-348379 containerd[816]: time="2024-09-23T11:53:29.836051818Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 11:53:30 addons-348379 containerd[816]: time="2024-09-23T11:53:30.311539670Z" level=info msg="RemoveContainer for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\""
	Sep 23 11:53:30 addons-348379 containerd[816]: time="2024-09-23T11:53:30.318780459Z" level=info msg="RemoveContainer for \"b6876ad1ce80c6abfc54049fde20e926db9153a093b099c97f352a398aaa63dd\" returns successfully"
	
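	Annotation: every Volcano pull above fails with 401 Unauthorized because the image reference carries a doubled registry prefix. "docker.io/docker.io/volcanosh/vc-scheduler" makes the registry treat "docker.io/volcanosh/vc-scheduler" as the repository path, which does not exist, and Docker Hub answers 401 for unknown repositories. A hedged reproduction from the node (assuming crictl is available, e.g. via minikube ssh; the second pull is the expected-good form):

	# Doubled prefix: resolves against registry-1.docker.io/v2/docker.io/volcanosh/... -> 401
	sudo crictl pull docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0
	# Same image without the duplicated registry segment should resolve normally.
	sudo crictl pull docker.io/volcanosh/vc-scheduler:v1.10.0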
	
	==> coredns [497f8c41b274e14c72d9933f13fac36b6c4acf8def62c9f3205823623e58226d] <==
	[INFO] 10.244.0.8:44197 - 54267 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180151s
	[INFO] 10.244.0.8:41305 - 6638 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001704017s
	[INFO] 10.244.0.8:41305 - 35565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141999s
	[INFO] 10.244.0.8:40738 - 27223 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080918s
	[INFO] 10.244.0.8:40738 - 10324 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129353s
	[INFO] 10.244.0.8:56286 - 60697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080098s
	[INFO] 10.244.0.8:56286 - 24093 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043076s
	[INFO] 10.244.0.8:43089 - 14336 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004978s
	[INFO] 10.244.0.8:43089 - 63494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035692s
	[INFO] 10.244.0.8:57209 - 23569 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041272s
	[INFO] 10.244.0.8:57209 - 18191 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041304s
	[INFO] 10.244.0.8:58538 - 8344 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004349043s
	[INFO] 10.244.0.8:58538 - 22686 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003876717s
	[INFO] 10.244.0.8:48601 - 29559 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050166s
	[INFO] 10.244.0.8:48601 - 628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054556s
	[INFO] 10.244.0.8:43467 - 15089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093161s
	[INFO] 10.244.0.8:43467 - 32245 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042002s
	[INFO] 10.244.0.8:54486 - 7532 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066798s
	[INFO] 10.244.0.8:54486 - 50287 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038416s
	[INFO] 10.244.0.8:33247 - 25963 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052963s
	[INFO] 10.244.0.8:33247 - 9581 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040123s
	[INFO] 10.244.0.8:42814 - 61299 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001881616s
	[INFO] 10.244.0.8:42814 - 6802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001453827s
	[INFO] 10.244.0.8:59224 - 13301 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046384s
	[INFO] 10.244.0.8:59224 - 30455 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035372s
	
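	Annotation: the NXDOMAIN lines above are not failures. With the default ndots:5 resolver configuration, each in-cluster lookup of registry.kube-system.svc.cluster.local is first expanded through the pod's search domains (…cluster.local, …us-east-2.compute.internal) before the exact name returns NOERROR. A sketch to confirm the zone configuration in play (the ConfigMap name is the stock kubeadm default):

	# Inspect the Corefile CoreDNS is serving; its kubernetes plugin block
	# defines the cluster.local zone that answers the NOERROR responses above.
	kubectl --context addons-348379 -n kube-system get configmap coredns -o yaml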
	
	==> describe nodes <==
	Name:               addons-348379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-348379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=addons-348379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-348379
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-348379"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:36:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-348379
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:56:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-348379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb92b9be866447b6af2ef85f12013e1b
	  System UUID:                84447a6a-e17a-42ff-ba46-fb82e93bc172
	  Boot ID:                    d8899273-2c3a-49f7-8c9a-66d2209373ba
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-xl5qc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6wq45    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-ppz9h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpathplugin-zdwf8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-addons-348379                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-4kcdh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-addons-348379                250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-348379       200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-nqbmm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-348379                100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-84c5f94fbc-dgpbq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 snapshot-controller-56fcc65765-d47ng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-56fcc65765-dchr7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  local-path-storage          local-path-provisioner-86d989889c-h5pl9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-admission-7f54bd7598-s85bg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-admission-init-f2bhm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-controllers-5ff7c5d4db-w658s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  volcano-system              volcano-scheduler-79dc4b78bb-2vx88          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x7 over 19m)  kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                node-controller  Node addons-348379 event: Registered Node addons-348379 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [8abc59946512ab5ab4d902de194545051c7e577bd5196212bf85326ca705cd43] <==
	{"level":"info","ts":"2024-09-23T11:36:27.823382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T11:36:27.823433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.825628Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-348379 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:36:27.825777Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827294Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827491Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827732Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.831921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.856413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.847318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:36:27.847973Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.857537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.856653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:28.587946Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1709}
	{"level":"info","ts":"2024-09-23T11:46:28.663638Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1709,"took":"75.159252ms","hash":39109330,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4308992,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-23T11:46:28.663685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":39109330,"revision":1709,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T11:51:28.595181Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2154}
	{"level":"info","ts":"2024-09-23T11:51:28.614611Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2154,"took":"18.702318ms","hash":1848337147,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":3706880,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-09-23T11:51:28.614664Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1848337147,"revision":2154,"compact-revision":1709}
	
	
	==> kernel <==
	 11:56:07 up 1 day, 19:38,  0 users,  load average: 0.10, 0.27, 1.04
	Linux addons-348379 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a7e55167b7b39430011e4998f44d30eaec7edc15414dc232724161a09a27e599] <==
	I0923 11:54:00.317562       1 main.go:299] handling current node
	I0923 11:54:10.312983       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:54:10.313016       1 main.go:299] handling current node
	I0923 11:54:20.315819       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:54:20.315855       1 main.go:299] handling current node
	I0923 11:54:30.312881       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:54:30.312928       1 main.go:299] handling current node
	I0923 11:54:40.312436       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:54:40.312481       1 main.go:299] handling current node
	I0923 11:54:50.312935       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:54:50.312967       1 main.go:299] handling current node
	I0923 11:55:00.321118       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:55:00.321165       1 main.go:299] handling current node
	I0923 11:55:10.320891       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:55:10.320933       1 main.go:299] handling current node
	I0923 11:55:20.312154       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:55:20.312191       1 main.go:299] handling current node
	I0923 11:55:30.312129       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:55:30.312163       1 main.go:299] handling current node
	I0923 11:55:40.312801       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:55:40.312838       1 main.go:299] handling current node
	I0923 11:55:50.312133       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:55:50.312172       1 main.go:299] handling current node
	I0923 11:56:00.320955       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:56:00.321196       1 main.go:299] handling current node
	
	
	==> kube-apiserver [20e5f68e09619b9d622e831b0c429aae0e245dfb5c647d9e6fd9193c6cdfedac] <==
	W0923 11:52:54.951805       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:52:56.015071       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:52:56.015112       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:52:56.017027       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:53:54.848789       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:53:54.848838       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:53:54.850863       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:53:54.958413       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:53:54.958459       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:53:54.960188       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:54:54.860093       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:54:54.860139       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:54:54.861783       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:54:54.967039       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:54:54.967079       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:54:54.968906       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:55:39.865095       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:55:39.865137       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:55:39.867161       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:55:54.870937       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:55:54.870980       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:55:54.872656       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:55:54.974780       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:55:54.974823       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:55:54.976569       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	
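	Annotation: two webhook failure modes interleave above. gcp-auth-mutate.k8s.io fails open, so pods are still admitted, while mutatepod.volcano.sh fails closed, so any pod creation it intercepts (e.g. the headlamp ReplicaSet) is rejected for as long as volcano-admission has no ready endpoints. A hedged way to confirm both policies and the missing backend:

	# Show each webhook's failurePolicy (Ignore = fail open, Fail = fail closed).
	kubectl --context addons-348379 get mutatingwebhookconfigurations -o custom-columns=NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy
	# The dial errors above mean this service has no ready endpoints behind it.
	kubectl --context addons-348379 -n volcano-system get endpoints volcano-admission-service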
	
	==> kube-controller-manager [92413a7a8d6f6d058e961a0759535062b02d7d50e5012e81614e6290ce3465b3] <==
	E0923 11:52:54.952505       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:52:54.953733       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:52:56.017847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="8.51678ms"
	E0923 11:52:56.017889       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-7b5c95b59d\" failed with Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:53:24.215667       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-348379"
	I0923 11:53:29.131995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="65.558µs"
	I0923 11:53:32.137269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="61.472µs"
	I0923 11:53:39.131735       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	I0923 11:53:41.130257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-scheduler-79dc4b78bb" duration="47.155µs"
	I0923 11:53:48.128833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="volcano-system/volcano-controllers-5ff7c5d4db" duration="46.121µs"
	I0923 11:53:53.132172       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="1s"
	E0923 11:53:54.851687       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:53:54.853641       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:53:54.961011       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:53:54.962378       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:54:54.862641       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:54:54.863882       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:54:54.969678       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:54:54.971014       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	I0923 11:55:39.867885       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="7.592689ms"
	E0923 11:55:39.867926       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-7b5c95b59d\" failed with Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:55:54.873256       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:55:54.874486       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:55:54.977367       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:55:54.978662       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-proxy [005547c4c4723cd6ce4dac939ebab2b2d89e428b99ec971a179497842dcb5abe] <==
	I0923 11:36:39.711887       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:36:39.808570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:36:39.808647       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:36:39.869255       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:36:39.869320       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:36:39.872284       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:36:39.872851       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:36:39.872865       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:36:39.885041       1 config.go:199] "Starting service config controller"
	I0923 11:36:39.885075       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:36:39.885100       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:36:39.885105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:36:39.886870       1 config.go:328] "Starting node config controller"
	I0923 11:36:39.886882       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:36:39.986150       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:36:39.986235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:36:39.987256       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9255b7a6f4a590493be04a5bbbdf14b0efddfe1f321b8a25d2eed1055c6741df] <==
	W0923 11:36:31.587742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:36:31.587843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:36:31.587943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:36:31.588052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:36:31.588221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.588380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:36:31.588540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:36:31.588655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:36:31.588819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:36:31.588921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:36:31.589185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 11:36:32.574074       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
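	Note: the burst of "forbidden" list/watch errors above is typical kube-scheduler startup noise: its informers start before the apiserver has reconciled the RBAC bootstrap policy, and the final "Caches are synced" line shows it recovered on its own. If such errors persisted, a hedged way to check the scheduler's permissions (standard kubectl impersonation; the user name is taken from the log lines above):
	
	  kubectl --context addons-348379 auth can-i list nodes --as=system:kube-scheduler
	  kubectl --context addons-348379 auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler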
	
	
	==> kubelet <==
	Sep 23 11:55:10 addons-348379 kubelet[1462]: E0923 11:55:10.116860    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:55:11 addons-348379 kubelet[1462]: E0923 11:55:11.116937    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:55:12 addons-348379 kubelet[1462]: E0923 11:55:12.117476    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:55:12 addons-348379 kubelet[1462]: E0923 11:55:12.814606    1462 secret.go:188] Couldn't get secret volcano-system/volcano-admission-secret: secret "volcano-admission-secret" not found
	Sep 23 11:55:12 addons-348379 kubelet[1462]: E0923 11:55:12.814721    1462 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b71580f-19db-4c01-80d4-e477def83079-admission-certs podName:0b71580f-19db-4c01-80d4-e477def83079 nodeName:}" failed. No retries permitted until 2024-09-23 11:57:14.814701255 +0000 UTC m=+1241.838091523 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "admission-certs" (UniqueName: "kubernetes.io/secret/0b71580f-19db-4c01-80d4-e477def83079-admission-certs") pod "volcano-admission-7f54bd7598-s85bg" (UID: "0b71580f-19db-4c01-80d4-e477def83079") : secret "volcano-admission-secret" not found
	Sep 23 11:55:20 addons-348379 kubelet[1462]: I0923 11:55:20.116187    1462 scope.go:117] "RemoveContainer" containerID="5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e"
	Sep 23 11:55:20 addons-348379 kubelet[1462]: E0923 11:55:20.116408    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
	Sep 23 11:55:22 addons-348379 kubelet[1462]: E0923 11:55:22.117193    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:55:25 addons-348379 kubelet[1462]: E0923 11:55:25.117041    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:55:26 addons-348379 kubelet[1462]: E0923 11:55:26.116721    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:55:31 addons-348379 kubelet[1462]: I0923 11:55:31.116834    1462 scope.go:117] "RemoveContainer" containerID="5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e"
	Sep 23 11:55:31 addons-348379 kubelet[1462]: E0923 11:55:31.117023    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
	Sep 23 11:55:34 addons-348379 kubelet[1462]: E0923 11:55:34.117637    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:55:36 addons-348379 kubelet[1462]: E0923 11:55:36.117627    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:55:38 addons-348379 kubelet[1462]: E0923 11:55:38.116997    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:55:43 addons-348379 kubelet[1462]: I0923 11:55:43.116657    1462 scope.go:117] "RemoveContainer" containerID="5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e"
	Sep 23 11:55:43 addons-348379 kubelet[1462]: E0923 11:55:43.116850    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
	Sep 23 11:55:47 addons-348379 kubelet[1462]: E0923 11:55:47.117726    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:55:50 addons-348379 kubelet[1462]: E0923 11:55:50.117802    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:55:50 addons-348379 kubelet[1462]: E0923 11:55:50.117816    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:55:57 addons-348379 kubelet[1462]: I0923 11:55:57.116042    1462 scope.go:117] "RemoveContainer" containerID="5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e"
	Sep 23 11:55:57 addons-348379 kubelet[1462]: E0923 11:55:57.116736    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xl5qc_gadget(41dc3424-c623-4804-93de-616b2916d6ed)\"" pod="gadget/gadget-xl5qc" podUID="41dc3424-c623-4804-93de-616b2916d6ed"
	Sep 23 11:56:01 addons-348379 kubelet[1462]: E0923 11:56:01.117339    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:56:02 addons-348379 kubelet[1462]: E0923 11:56:02.117061    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:56:03 addons-348379 kubelet[1462]: E0923 11:56:03.117788    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
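	Note: every Volcano pull failure above references an image with a doubled registry prefix (docker.io/docker.io/volcanosh/...), which parses as a docker.io/volcanosh/* repository path on Docker Hub that does not exist, hence the ImagePullBackOff; the volcano-admission-secret mount error is apparently downstream of the admission init job never running. A hedged way to confirm from the node (assumes crictl is available in the minikube image and the node has registry access; the digest is copied from the log):
	
	  # the doubled prefix fails to resolve ...
	  minikube -p addons-348379 ssh -- sudo crictl pull docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882
	  # ... while the single-prefix reference is pullable
	  minikube -p addons-348379 ssh -- sudo crictl pull docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882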
	
	
	==> storage-provisioner [b05bd4b18e2804c1706af6011e03e349f643f23d7f968ca74ffb0f2eaf78047d] <==
	I0923 11:36:43.550112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:36:43.591903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:36:43.592066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:36:43.604224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:36:43.604420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	I0923 11:36:43.605459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdf468f-763c-4a54-98e3-d90ea0e2e8e5", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-348379_e926b365-1f0f-4822-899e-75d077991921 became leader
	I0923 11:36:43.707026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	

-- /stdout --
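Note: the storage-provisioner block above shows a healthy endpoints-based leader election: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the provisioner controller. A hedged way to inspect the lock record (client-go's endpoints lock conventionally stores the holder in a control-plane.alpha.kubernetes.io/leader annotation; that key is an assumption here):

  kubectl --context addons-348379 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml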
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
helpers_test.go:261: (dbg) Run:  kubectl --context addons-348379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1 (98.79465ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fwt6v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bfw56" not found
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-s85bg" not found
	Error from server (NotFound): pods "volcano-admission-init-f2bhm" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-w658s" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-2vx88" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1
--- FAIL: TestAddons/parallel/CSI (374.17s)
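Note: the NotFound errors above are a post-mortem race rather than a second failure: the six pods existed when the field-selector list ran, but had been deleted (likely by parallel addon teardown) before the follow-up describe. A hedged way to close that gap is to capture the full objects in the same filtered call:

  kubectl --context addons-348379 get po -A --field-selector=status.phase!=Running -o yaml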

TestAddons/parallel/Headlamp (483.79s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-348379 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-348379 --alsologtostderr -v=1: (1.056587878s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:329: TestAddons/parallel/Headlamp: WARNING: pod list for "headlamp" "app.kubernetes.io/name=headlamp" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:773: ***** TestAddons/parallel/Headlamp: pod "app.kubernetes.io/name=headlamp" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:773: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
addons_test.go:773: TestAddons/parallel/Headlamp: showing logs for failed pods as of 2024-09-23 11:58:12.474411635 +0000 UTC m=+1358.984458982
addons_test.go:774: failed waiting for headlamp pod: app.kubernetes.io/name=headlamp within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-348379
helpers_test.go:235: (dbg) docker inspect addons-348379:

-- stdout --
	[
	    {
	        "Id": "1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751",
	        "Created": "2024-09-23T11:36:14.086186806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2904413,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:36:14.226479889Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hostname",
	        "HostsPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/hosts",
	        "LogPath": "/var/lib/docker/containers/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751/1973e07b6a14d698a0e01da9e5a3dde89034e20a39d1a708ed0d4486852c1751-json.log",
	        "Name": "/addons-348379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-348379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-348379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3-init/diff:/var/lib/docker/overlay2/e2b16ea68ee0680d6b3555ff1ad64b95e5f88f6159373a302ec4d54fa432d99a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbf4672e3fa58fe57e5b3bda52a0a0162ed324b3254967520cb0346a6a7c9ef3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-348379",
	                "Source": "/var/lib/docker/volumes/addons-348379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-348379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-348379",
	                "name.minikube.sigs.k8s.io": "addons-348379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0be556c33c7ef40a6e70f0f396b5b7933e0b3ad3ca535a0d80fe041626578e74",
	            "SandboxKey": "/var/run/docker/netns/0be556c33c7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "41795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-348379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d5b669f6f456e67aee85da3145f607fecadf2af36a162932dd5e9bc9ffffee31",
	                    "EndpointID": "406dde2b8517c79d36b643a1ab3c5c13c7554eb0ba705342617104596090b341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-348379",
	                        "1973e07b6a14"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
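Note: in the inspect output, HostConfig.PortBindings binds each port to 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports at container start; the actual assignments appear under NetworkSettings.Ports (41792-41796 in this run). A hedged way to read one mapping without parsing the JSON:

  docker port addons-348379 8443/tcp   # expected to print 127.0.0.1:41795 for this run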
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-348379 -n addons-348379
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-348379 logs -n 25: (1.425246337s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-611017              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | -o=json --download-only              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-423730              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017              | download-only-611017   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-423730              | download-only-423730   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | download-docker-021793               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-021793            | download-docker-021793 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | --download-only -p                   | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | binary-mirror-046209                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34157               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-046209              | binary-mirror-046209   | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| addons  | enable dashboard -p                  | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| start   | -p addons-348379 --wait=true         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:42 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| ip      | addons-348379 ip                     | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	| addons  | addons-348379 addons disable         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:49 UTC | 23 Sep 24 11:49 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | -p addons-348379                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | addons-348379                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:50 UTC | 23 Sep 24 11:50 UTC |
	|         | -p addons-348379                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-348379 addons                 | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-348379          | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	|         | addons-348379                        |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:35:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
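	Note: the header above documents the klog line format used by the remainder of this log. A hedged one-liner for filtering a saved copy down to warnings and errors (assumes a raw, untabbed log file named minikube.log):
	
	  grep -E '^[WE][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6}' minikube.log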
	I0923 11:35:49.628900 2903914 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:35:49.629020 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629030 2903914 out.go:358] Setting ErrFile to fd 2...
	I0923 11:35:49.629036 2903914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:49.629290 2903914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 11:35:49.629765 2903914 out.go:352] Setting JSON to false
	I0923 11:35:49.630698 2903914 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155897,"bootTime":1726935453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:35:49.630769 2903914 start.go:139] virtualization:  
	I0923 11:35:49.632856 2903914 out.go:177] * [addons-348379] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 11:35:49.634664 2903914 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 11:35:49.634735 2903914 notify.go:220] Checking for updates...
	I0923 11:35:49.637703 2903914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:35:49.639443 2903914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:35:49.640961 2903914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 11:35:49.642654 2903914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 11:35:49.644154 2903914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:35:49.646005 2903914 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:35:49.674816 2903914 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:35:49.674959 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.740942 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.731429543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.741058 2903914 docker.go:318] overlay module found
	I0923 11:35:49.742922 2903914 out.go:177] * Using the docker driver based on user configuration
	I0923 11:35:49.744404 2903914 start.go:297] selected driver: docker
	I0923 11:35:49.744427 2903914 start.go:901] validating driver "docker" against <nil>
	I0923 11:35:49.744443 2903914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:35:49.745066 2903914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:49.807015 2903914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 11:35:49.798069613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:49.807243 2903914 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:35:49.807490 2903914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:35:49.810585 2903914 out.go:177] * Using Docker driver with root privileges
	I0923 11:35:49.812100 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:35:49.812180 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:35:49.812195 2903914 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 11:35:49.812295 2903914 start.go:340] cluster config:
	{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:35:49.814088 2903914 out.go:177] * Starting "addons-348379" primary control-plane node in "addons-348379" cluster
	I0923 11:35:49.815629 2903914 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 11:35:49.817311 2903914 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:35:49.818975 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:35:49.819031 2903914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:49.819044 2903914 cache.go:56] Caching tarball of preloaded images
	I0923 11:35:49.819072 2903914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:35:49.819129 2903914 preload.go:172] Found /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 11:35:49.819140 2903914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0923 11:35:49.819629 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:35:49.819663 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json: {Name:mk57bf6c9d1a024b95a9182333fb0e843fbdc049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:35:49.834226 2903914 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:49.834349 2903914 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:35:49.834370 2903914 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:35:49.834376 2903914 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:35:49.834383 2903914 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:35:49.834388 2903914 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 11:36:07.477009 2903914 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 11:36:07.477055 2903914 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:36:07.477102 2903914 start.go:360] acquireMachinesLock for addons-348379: {Name:mk0afc734c4276635047574670b52ff1624a597d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:36:07.477241 2903914 start.go:364] duration metric: took 114.625µs to acquireMachinesLock for "addons-348379"
	I0923 11:36:07.477273 2903914 start.go:93] Provisioning new machine with config: &{Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:07.477361 2903914 start.go:125] createHost starting for "" (driver="docker")
	I0923 11:36:07.479499 2903914 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 11:36:07.479768 2903914 start.go:159] libmachine.API.Create for "addons-348379" (driver="docker")
	I0923 11:36:07.479806 2903914 client.go:168] LocalClient.Create starting
	I0923 11:36:07.479934 2903914 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem
	I0923 11:36:07.656758 2903914 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem
	I0923 11:36:07.895005 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 11:36:07.911037 2903914 cli_runner.go:211] docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 11:36:07.911143 2903914 network_create.go:284] running [docker network inspect addons-348379] to gather additional debugging logs...
	I0923 11:36:07.911165 2903914 cli_runner.go:164] Run: docker network inspect addons-348379
	W0923 11:36:07.926743 2903914 cli_runner.go:211] docker network inspect addons-348379 returned with exit code 1
	I0923 11:36:07.926792 2903914 network_create.go:287] error running [docker network inspect addons-348379]: docker network inspect addons-348379: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-348379 not found
	I0923 11:36:07.926806 2903914 network_create.go:289] output of [docker network inspect addons-348379]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-348379 not found
	
	** /stderr **
	I0923 11:36:07.926904 2903914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:07.941390 2903914 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3eb90}
	I0923 11:36:07.941437 2903914 network_create.go:124] attempt to create docker network addons-348379 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 11:36:07.941499 2903914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-348379 addons-348379
	I0923 11:36:08.013253 2903914 network_create.go:108] docker network addons-348379 192.168.49.0/24 created
	I0923 11:36:08.013292 2903914 kic.go:121] calculated static IP "192.168.49.2" for the "addons-348379" container
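For reference, the subnet and gateway chosen above can be read back from the created network; a minimal sketch using the docker CLI and the addons-348379 network name from this run:

	# Print the subnet and gateway of the bridge network minikube just created
	docker network inspect addons-348379 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# Expected for this run: 192.168.49.0/24 192.168.49.1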
	I0923 11:36:08.013374 2903914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 11:36:08.030080 2903914 cli_runner.go:164] Run: docker volume create addons-348379 --label name.minikube.sigs.k8s.io=addons-348379 --label created_by.minikube.sigs.k8s.io=true
	I0923 11:36:08.048023 2903914 oci.go:103] Successfully created a docker volume addons-348379
	I0923 11:36:08.048128 2903914 cli_runner.go:164] Run: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 11:36:10.056258 2903914 cli_runner.go:217] Completed: docker run --rm --name addons-348379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --entrypoint /usr/bin/test -v addons-348379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.008084592s)
	I0923 11:36:10.056295 2903914 oci.go:107] Successfully prepared a docker volume addons-348379
	I0923 11:36:10.056323 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:10.056345 2903914 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 11:36:10.056440 2903914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 11:36:14.019670 2903914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-348379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.963182399s)
	I0923 11:36:14.019706 2903914 kic.go:203] duration metric: took 3.963357873s to extract preloaded images to volume ...
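The extraction above unpacks the preload tarball into the named volume that later backs the node's /var. A rough way to spot-check it (this assumes the tarball lays out a /var tree with the containerd content store under lib/containerd, and uses a throwaway alpine container):

	# List what the preload left in the volume; any small image with ls works here
	docker run --rm -v addons-348379:/var alpine ls /var/lib/containerd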
	W0923 11:36:14.019879 2903914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 11:36:14.020008 2903914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 11:36:14.071499 2903914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-348379 --name addons-348379 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348379 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-348379 --network addons-348379 --ip 192.168.49.2 --volume addons-348379:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 11:36:14.406562 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Running}}
	I0923 11:36:14.430276 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.456577 2903914 cli_runner.go:164] Run: docker exec addons-348379 stat /var/lib/dpkg/alternatives/iptables
	I0923 11:36:14.524343 2903914 oci.go:144] the created container "addons-348379" has a running status.
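The kic container publishes SSH and the API server on ephemeral localhost ports (the --publish=127.0.0.1:: flags above). A quick sketch to map them back:

	# Resolve the host ports docker assigned to the container's published ports
	docker port addons-348379 22/tcp     # SSH; 127.0.0.1:41792 in this run
	docker port addons-348379 8443/tcp   # Kubernetes API server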
	I0923 11:36:14.524373 2903914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa...
	I0923 11:36:14.817157 2903914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 11:36:14.839902 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.872963 2903914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 11:36:14.872990 2903914 kic_runner.go:114] Args: [docker exec --privileged addons-348379 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 11:36:14.951457 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:14.976890 2903914 machine.go:93] provisionDockerMachine start ...
	I0923 11:36:14.977005 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.007610 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.007901 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.007913 2903914 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:36:15.203560 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.203583 2903914 ubuntu.go:169] provisioning hostname "addons-348379"
	I0923 11:36:15.203659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.229660 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.229941 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.229961 2903914 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-348379 && echo "addons-348379" | sudo tee /etc/hostname
	I0923 11:36:15.387302 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348379
	
	I0923 11:36:15.387387 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.409587 2903914 main.go:141] libmachine: Using SSH client type: native
	I0923 11:36:15.409829 2903914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 41792 <nil> <nil>}
	I0923 11:36:15.409846 2903914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-348379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-348379/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-348379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:36:15.552128 2903914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:36:15.552218 2903914 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
	I0923 11:36:15.552276 2903914 ubuntu.go:177] setting up certificates
	I0923 11:36:15.552305 2903914 provision.go:84] configureAuth start
	I0923 11:36:15.552432 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:15.571035 2903914 provision.go:143] copyHostCerts
	I0923 11:36:15.571118 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
	I0923 11:36:15.571374 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
	I0923 11:36:15.571463 2903914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
	I0923 11:36:15.571520 2903914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.addons-348379 san=[127.0.0.1 192.168.49.2 addons-348379 localhost minikube]
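To confirm the SAN list baked into the generated server certificate, openssl can print it directly; a sketch using the path from this run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# Should list: addons-348379, localhost, minikube, 127.0.0.1, 192.168.49.2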
	I0923 11:36:15.936111 2903914 provision.go:177] copyRemoteCerts
	I0923 11:36:15.936188 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:36:15.936230 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:15.954080 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.048521 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:36:16.073032 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:36:16.096585 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 11:36:16.120809 2903914 provision.go:87] duration metric: took 568.476502ms to configureAuth
	I0923 11:36:16.120878 2903914 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:36:16.121066 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:16.121075 2903914 machine.go:96] duration metric: took 1.144158288s to provisionDockerMachine
	I0923 11:36:16.121082 2903914 client.go:171] duration metric: took 8.641266117s to LocalClient.Create
	I0923 11:36:16.121105 2903914 start.go:167] duration metric: took 8.641338888s to libmachine.API.Create "addons-348379"
	I0923 11:36:16.121117 2903914 start.go:293] postStartSetup for "addons-348379" (driver="docker")
	I0923 11:36:16.121127 2903914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:36:16.121180 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:36:16.121219 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.140164 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.237274 2903914 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:36:16.240688 2903914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:36:16.240726 2903914 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:36:16.240751 2903914 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:36:16.240759 2903914 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:36:16.240772 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
	I0923 11:36:16.240845 2903914 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
	I0923 11:36:16.240872 2903914 start.go:296] duration metric: took 119.748923ms for postStartSetup
	I0923 11:36:16.241197 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.257321 2903914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/config.json ...
	I0923 11:36:16.257608 2903914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:36:16.257659 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.273475 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.364420 2903914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:36:16.369417 2903914 start.go:128] duration metric: took 8.892040374s to createHost
	I0923 11:36:16.369444 2903914 start.go:83] releasing machines lock for "addons-348379", held for 8.892189913s
	I0923 11:36:16.369525 2903914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348379
	I0923 11:36:16.386496 2903914 ssh_runner.go:195] Run: cat /version.json
	I0923 11:36:16.386558 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.386844 2903914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:36:16.386924 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:16.402767 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.407504 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:16.620856 2903914 ssh_runner.go:195] Run: systemctl --version
	I0923 11:36:16.625351 2903914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:36:16.629494 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 11:36:16.656196 2903914 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 11:36:16.656273 2903914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:36:16.685634 2903914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
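The loopback patch above injects a "name" field and pins cniVersion to 1.0.0. After patching, the loopback config should look roughly like this (a sketch of the standard CNI loopback config, not a capture from this node):

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}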
	I0923 11:36:16.685658 2903914 start.go:495] detecting cgroup driver to use...
	I0923 11:36:16.685694 2903914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:36:16.685752 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 11:36:16.698438 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:36:16.709958 2903914 docker.go:217] disabling cri-docker service (if available) ...
	I0923 11:36:16.710048 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 11:36:16.723912 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 11:36:16.738695 2903914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 11:36:16.833978 2903914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 11:36:16.926860 2903914 docker.go:233] disabling docker service ...
	I0923 11:36:16.926964 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 11:36:16.947106 2903914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 11:36:16.959548 2903914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 11:36:17.053558 2903914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 11:36:17.135034 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 11:36:17.146613 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:36:17.163902 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:36:17.174292 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:36:17.185041 2903914 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:36:17.185112 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:36:17.195986 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.206180 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:36:17.217391 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:36:17.228075 2903914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:36:17.237476 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:36:17.247362 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:36:17.257646 2903914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
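The sed pipeline above rewrites /etc/containerd/config.toml in place. Reconstructed from those commands (not a dump of the actual file), the CRI plugin section should end up resembling:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false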
	I0923 11:36:17.267821 2903914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:36:17.276866 2903914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:36:17.286512 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.359977 2903914 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:36:17.486930 2903914 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 11:36:17.487093 2903914 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 11:36:17.490604 2903914 start.go:563] Will wait 60s for crictl version
	I0923 11:36:17.490709 2903914 ssh_runner.go:195] Run: which crictl
	I0923 11:36:17.494017 2903914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:36:17.529914 2903914 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 11:36:17.530001 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.553062 2903914 ssh_runner.go:195] Run: containerd --version
	I0923 11:36:17.581066 2903914 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 11:36:17.583092 2903914 cli_runner.go:164] Run: docker network inspect addons-348379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 11:36:17.598970 2903914 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 11:36:17.602709 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.616517 2903914 kubeadm.go:883] updating cluster {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:36:17.616637 2903914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:36:17.616705 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.653561 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.653588 2903914 containerd.go:534] Images already preloaded, skipping extraction
	I0923 11:36:17.653654 2903914 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 11:36:17.689043 2903914 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 11:36:17.689069 2903914 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:36:17.689077 2903914 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0923 11:36:17.689170 2903914 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-348379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
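A note on the unit dump above: the empty ExecStart= line is deliberate. In a systemd drop-in, an empty ExecStart= clears the ExecStart inherited from the base kubelet.service before the new command line is set. To see the merged result on the node:

	# Show kubelet.service plus its drop-ins, in the order systemd applies them
	systemctl cat kubelet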
	I0923 11:36:17.689240 2903914 ssh_runner.go:195] Run: sudo crictl info
	I0923 11:36:17.725180 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:17.725207 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:17.725219 2903914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:36:17.725244 2903914 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-348379 NodeName:addons-348379 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:36:17.725401 2903914 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-348379"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
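Before this kubeadm config is handed to kubeadm init (below), it can be exercised without touching the node; a sketch assuming the same path this run uses:

	# Render the manifests and certificates kubeadm would create, without applying them
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run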
	
	I0923 11:36:17.725481 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:36:17.734617 2903914 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:36:17.734696 2903914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:36:17.743298 2903914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 11:36:17.761798 2903914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:36:17.779190 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0923 11:36:17.797282 2903914 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:36:17.801431 2903914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:36:17.813470 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:17.904297 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:17.918201 2903914 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379 for IP: 192.168.49.2
	I0923 11:36:17.918278 2903914 certs.go:194] generating shared ca certs ...
	I0923 11:36:17.918311 2903914 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:17.918478 2903914 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
	I0923 11:36:18.402482 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt ...
	I0923 11:36:18.402521 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt: {Name:mka24ad8ce2563bd38493ad3048e3b202e9928cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.403346 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key ...
	I0923 11:36:18.403367 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key: {Name:mke301cd867e18ebea9d875f8c02fb489d6a0a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.404035 2903914 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
	I0923 11:36:18.591619 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt ...
	I0923 11:36:18.591652 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt: {Name:mkf29e9cf8d545d0d33d0ce8b9548c24a316f1e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.591849 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key ...
	I0923 11:36:18.591862 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key: {Name:mk38a5ba119b442a98d5a1991cd20b7dc11fb378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.592490 2903914 certs.go:256] generating profile certs ...
	I0923 11:36:18.592562 2903914 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key
	I0923 11:36:18.592583 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt with IP's: []
	I0923 11:36:18.906095 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt ...
	I0923 11:36:18.906134 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: {Name:mk33102a3556c59cf025437aacb3628bfa41ed3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906340 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key ...
	I0923 11:36:18.906354 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.key: {Name:mk1894a7120f896161f07a459fec6eb4fe11e236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:18.906997 2903914 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb
	I0923 11:36:18.907023 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 11:36:19.176533 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb ...
	I0923 11:36:19.176571 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb: {Name:mkf3902710f18b86666bddc46eb9d246a2fd9230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177433 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb ...
	I0923 11:36:19.177458 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb: {Name:mkcc729dc3699800a37a33c607924c19bb2a2d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.177602 2903914 certs.go:381] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt
	I0923 11:36:19.177687 2903914 certs.go:385] copying /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key.a6316aeb -> /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key
	I0923 11:36:19.177743 2903914 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key
	I0923 11:36:19.177760 2903914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt with IP's: []
	I0923 11:36:19.407564 2903914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt ...
	I0923 11:36:19.407592 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt: {Name:mk60cabd91332996a9c3d4f42fab2e735667c2da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408332 2903914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key ...
	I0923 11:36:19.408353 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key: {Name:mk59bc62a8c0559971fc4c2dcb7a472d97d949c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:19.408565 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:36:19.408610 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:36:19.408642 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:36:19.408675 2903914 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
	I0923 11:36:19.409266 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:36:19.438812 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:36:19.463399 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:36:19.487765 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:36:19.512554 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:36:19.537552 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:36:19.562531 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:36:19.587389 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:36:19.612515 2903914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:36:19.638755 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:36:19.657011 2903914 ssh_runner.go:195] Run: openssl version
	I0923 11:36:19.662616 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:36:19.672363 2903914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675882 2903914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.675950 2903914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:36:19.682948 2903914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
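The b5213941.0 link name above follows the OpenSSL subject-hash convention used for CA lookup under /etc/ssl/certs: the hash printed by the openssl x509 -hash call becomes the symlink name. A sketch reproducing it:

	# Recreate the hash-named CA symlink the way minikube does
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # ${h} is b5213941 here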
	I0923 11:36:19.692316 2903914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:36:19.695950 2903914 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:36:19.695999 2903914 kubeadm.go:392] StartCluster: {Name:addons-348379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-348379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:36:19.696080 2903914 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 11:36:19.696143 2903914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 11:36:19.737759 2903914 cri.go:89] found id: ""
	I0923 11:36:19.737855 2903914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:36:19.746943 2903914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:36:19.756075 2903914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 11:36:19.756179 2903914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:36:19.767199 2903914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:36:19.767221 2903914 kubeadm.go:157] found existing configuration files:
	
	I0923 11:36:19.767362 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:36:19.776658 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:36:19.776748 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:36:19.785348 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:36:19.794057 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:36:19.794157 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:36:19.803152 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.812476 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:36:19.812574 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:36:19.821847 2903914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:36:19.830557 2903914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:36:19.830648 2903914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:36:19.839343 2903914 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 11:36:19.882483 2903914 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:36:19.882775 2903914 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:36:19.901384 2903914 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 11:36:19.901507 2903914 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 11:36:19.901565 2903914 kubeadm.go:310] OS: Linux
	I0923 11:36:19.901637 2903914 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 11:36:19.901714 2903914 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 11:36:19.901776 2903914 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 11:36:19.901867 2903914 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 11:36:19.901970 2903914 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 11:36:19.902052 2903914 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 11:36:19.902117 2903914 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 11:36:19.902212 2903914 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 11:36:19.902294 2903914 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 11:36:19.967219 2903914 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:36:19.967380 2903914 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:36:19.967473 2903914 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:36:19.973209 2903914 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:36:19.975734 2903914 out.go:235]   - Generating certificates and keys ...
	I0923 11:36:19.975831 2903914 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:36:19.975904 2903914 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:36:20.179743 2903914 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:36:21.047813 2903914 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:36:21.298950 2903914 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:36:22.135071 2903914 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:36:22.608576 2903914 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:36:22.609029 2903914 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:22.891026 2903914 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:36:22.891409 2903914 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-348379 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 11:36:23.205606 2903914 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:36:23.489426 2903914 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:36:23.714238 2903914 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:36:23.714637 2903914 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:36:23.917484 2903914 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:36:24.438330 2903914 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:36:24.759712 2903914 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:36:25.033943 2903914 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:36:25.695483 2903914 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:36:25.696139 2903914 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:36:25.699152 2903914 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:36:25.701227 2903914 out.go:235]   - Booting up control plane ...
	I0923 11:36:25.701329 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:36:25.702980 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:36:25.704104 2903914 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:36:25.714874 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:36:25.721041 2903914 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:36:25.721278 2903914 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:36:25.822291 2903914 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:36:25.822414 2903914 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:36:26.325513 2903914 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.540451ms
	I0923 11:36:26.325611 2903914 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:36:32.327784 2903914 kubeadm.go:310] [api-check] The API server is healthy after 6.002254671s
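Both waits above poll plain health endpoints, so they are easy to reproduce by hand. A quick probe from the node, assuming the same ports the log names (10248 for the kubelet, 8443 for the API server):

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
    # -k skips certificate verification, acceptable for a local smoke test
    curl -skf https://127.0.0.1:8443/healthz && echo apiserver ok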
	I0923 11:36:32.349741 2903914 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:36:32.363473 2903914 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:36:32.390246 2903914 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:36:32.390456 2903914 kubeadm.go:310] [mark-control-plane] Marking the node addons-348379 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:36:32.404726 2903914 kubeadm.go:310] [bootstrap-token] Using token: 9jvvlf.nkkd2cu2r67rq0id
	I0923 11:36:32.407710 2903914 out.go:235]   - Configuring RBAC rules ...
	I0923 11:36:32.407927 2903914 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:36:32.415924 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:36:32.424675 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:36:32.430654 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:36:32.435150 2903914 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:36:32.439236 2903914 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
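The [bootstrap-token] phase wires RBAC around the token that the join commands further down embed. The standard kubeadm commands for inspecting or replacing it, run on the control-plane node:

    sudo kubeadm token list                        # 9jvvlf.nkkd2cu2r67rq0id should appear here
    sudo kubeadm token create --print-join-command # mints a fresh token plus a ready-to-run join line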
	I0923 11:36:32.750475 2903914 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:36:33.169716 2903914 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:36:33.734975 2903914 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:36:33.736330 2903914 kubeadm.go:310] 
	I0923 11:36:33.736408 2903914 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:36:33.736419 2903914 kubeadm.go:310] 
	I0923 11:36:33.736495 2903914 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:36:33.736505 2903914 kubeadm.go:310] 
	I0923 11:36:33.736531 2903914 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:36:33.736593 2903914 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:36:33.736647 2903914 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:36:33.736656 2903914 kubeadm.go:310] 
	I0923 11:36:33.736710 2903914 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:36:33.736719 2903914 kubeadm.go:310] 
	I0923 11:36:33.736766 2903914 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:36:33.736774 2903914 kubeadm.go:310] 
	I0923 11:36:33.736827 2903914 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:36:33.736907 2903914 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:36:33.736980 2903914 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:36:33.736989 2903914 kubeadm.go:310] 
	I0923 11:36:33.737074 2903914 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:36:33.737166 2903914 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:36:33.737175 2903914 kubeadm.go:310] 
	I0923 11:36:33.737258 2903914 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737363 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 \
	I0923 11:36:33.737389 2903914 kubeadm.go:310] 	--control-plane 
	I0923 11:36:33.737397 2903914 kubeadm.go:310] 
	I0923 11:36:33.737482 2903914 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:36:33.737497 2903914 kubeadm.go:310] 
	I0923 11:36:33.737577 2903914 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9jvvlf.nkkd2cu2r67rq0id \
	I0923 11:36:33.737677 2903914 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ac02d2aa360d89bf33102c2cc3695edceca639012bc17730f4b3249cca9bef37 
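If the printed join command is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA. This is the standard recipe from the Kubernetes docs and should reproduce the sha256:ac02... digest shown above:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'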
	I0923 11:36:33.741014 2903914 kubeadm.go:310] W0923 11:36:19.879102    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741352 2903914 kubeadm.go:310] W0923 11:36:19.880019    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:36:33.741583 2903914 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 11:36:33.741697 2903914 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 11:36:33.741739 2903914 cni.go:84] Creating CNI manager for ""
	I0923 11:36:33.741754 2903914 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:36:33.744827 2903914 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 11:36:33.747515 2903914 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 11:36:33.751591 2903914 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 11:36:33.751612 2903914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 11:36:33.770489 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
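With the docker driver and containerd runtime, minikube picks kindnet and applies its manifest through the bundled kubectl, as logged above. A hedged way to confirm the CNI actually came up (the DaemonSet name kindnet is an assumption about the manifest, which the log does not print):

    # from any kubectl pointed at the cluster's kubeconfig
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
    # on the node: portmap (stat'ed above) and the other CNI binaries live here
    ls /opt/cni/bin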
	I0923 11:36:34.059421 2903914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:36:34.059507 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.059553 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-348379 minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e minikube.k8s.io/name=addons-348379 minikube.k8s.io/primary=true
	I0923 11:36:34.223427 2903914 ops.go:34] apiserver oom_adj: -16
	I0923 11:36:34.223613 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:34.724103 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.223668 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:35.724171 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.224392 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:36.724281 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.223628 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:37.724422 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.224601 2903914 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:36:38.335390 2903914 kubeadm.go:1113] duration metric: took 4.275959417s to wait for elevateKubeSystemPrivileges
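The burst of `kubectl get sa default` calls above is minikube waiting for the token controller to create the default service account before it binds cluster-admin to kube-system (the elevateKubeSystemPrivileges step timed at 4.27s). The same wait as a plain shell loop, sketched with a generic kubectl:

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5   # the SA appears shortly after the controller-manager starts
    done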
	I0923 11:36:38.335426 2903914 kubeadm.go:394] duration metric: took 18.639429725s to StartCluster
	I0923 11:36:38.335446 2903914 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336106 2903914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:36:38.336533 2903914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:36:38.336743 2903914 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 11:36:38.336888 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:36:38.337142 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.337173 2903914 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
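The toEnable map above is the resolved addon set for this profile; the same switches can be flipped per profile from the minikube CLI, for example:

    minikube -p addons-348379 addons enable volcano
    minikube -p addons-348379 addons list     # shows enabled/disabled state per addon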
	I0923 11:36:38.337257 2903914 addons.go:69] Setting yakd=true in profile "addons-348379"
	I0923 11:36:38.337273 2903914 addons.go:234] Setting addon yakd=true in "addons-348379"
	I0923 11:36:38.337299 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.337814 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338086 2903914 addons.go:69] Setting inspektor-gadget=true in profile "addons-348379"
	I0923 11:36:38.338109 2903914 addons.go:234] Setting addon inspektor-gadget=true in "addons-348379"
	I0923 11:36:38.338133 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338208 2903914 addons.go:69] Setting metrics-server=true in profile "addons-348379"
	I0923 11:36:38.338232 2903914 addons.go:234] Setting addon metrics-server=true in "addons-348379"
	I0923 11:36:38.338262 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.338580 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.338757 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.339010 2903914 addons.go:69] Setting cloud-spanner=true in profile "addons-348379"
	I0923 11:36:38.339029 2903914 addons.go:234] Setting addon cloud-spanner=true in "addons-348379"
	I0923 11:36:38.339055 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.339531 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.343469 2903914 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-348379"
	I0923 11:36:38.343503 2903914 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-348379"
	I0923 11:36:38.345984 2903914 out.go:177] * Verifying Kubernetes components...
	I0923 11:36:38.346027 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.346500 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.348537 2903914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:36:38.345645 2903914 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-348379"
	I0923 11:36:38.365526 2903914 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:38.365567 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.366036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345659 2903914 addons.go:69] Setting default-storageclass=true in profile "addons-348379"
	I0923 11:36:38.373074 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-348379"
	I0923 11:36:38.373503 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345667 2903914 addons.go:69] Setting gcp-auth=true in profile "addons-348379"
	I0923 11:36:38.384131 2903914 mustload.go:65] Loading cluster: addons-348379
	I0923 11:36:38.384331 2903914 config.go:182] Loaded profile config "addons-348379": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 11:36:38.384582 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345675 2903914 addons.go:69] Setting ingress=true in profile "addons-348379"
	I0923 11:36:38.394024 2903914 addons.go:234] Setting addon ingress=true in "addons-348379"
	I0923 11:36:38.394117 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.345679 2903914 addons.go:69] Setting ingress-dns=true in profile "addons-348379"
	I0923 11:36:38.401504 2903914 addons.go:234] Setting addon ingress-dns=true in "addons-348379"
	I0923 11:36:38.404456 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.405051 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345945 2903914 addons.go:69] Setting registry=true in profile "addons-348379"
	I0923 11:36:38.410134 2903914 addons.go:234] Setting addon registry=true in "addons-348379"
	I0923 11:36:38.410210 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.411036 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.345953 2903914 addons.go:69] Setting storage-provisioner=true in profile "addons-348379"
	I0923 11:36:38.345956 2903914 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-348379"
	I0923 11:36:38.345960 2903914 addons.go:69] Setting volcano=true in profile "addons-348379"
	I0923 11:36:38.345964 2903914 addons.go:69] Setting volumesnapshots=true in profile "addons-348379"
	I0923 11:36:38.411256 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.454084 2903914 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:36:38.460897 2903914 addons.go:234] Setting addon storage-provisioner=true in "addons-348379"
	I0923 11:36:38.460953 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.461441 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.478144 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:36:38.478222 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:36:38.478322 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
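The repeated docker container inspect calls with the NetworkSettings.Ports template resolve the host port that maps to the container's SSH port 22 (41792 here, as the `new ssh client` lines below show); `docker port` gives the same answer more directly:

    docker port addons-348379 22    # prints e.g. 0.0.0.0:41792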
	I0923 11:36:38.487269 2903914 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-348379"
	I0923 11:36:38.487639 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.517904 2903914 addons.go:234] Setting addon volcano=true in "addons-348379"
	I0923 11:36:38.517966 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.518467 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.546347 2903914 addons.go:234] Setting addon volumesnapshots=true in "addons-348379"
	I0923 11:36:38.546411 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.546987 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.575673 2903914 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:36:38.575914 2903914 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:36:38.579125 2903914 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:38.579150 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:36:38.579221 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.579592 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:36:38.579639 2903914 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:36:38.579689 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.617860 2903914 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:36:38.622150 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:36:38.622177 2903914 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:36:38.622251 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.622735 2903914 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:36:38.624431 2903914 addons.go:234] Setting addon default-storageclass=true in "addons-348379"
	I0923 11:36:38.624466 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.628278 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.632340 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.636671 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:36:38.637544 2903914 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:38.637561 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:36:38.637622 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.655509 2903914 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:36:38.662366 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:36:38.665082 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:36:38.665107 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:36:38.665177 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.670015 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:36:38.676295 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:38.676385 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:36:38.676464 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.683562 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:36:38.686396 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:38.686419 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:36:38.686489 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.712304 2903914 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-348379"
	I0923 11:36:38.712346 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:38.712766 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:38.727410 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:36:38.730115 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:36:38.735471 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0923 11:36:38.735719 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.757102 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.760000 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:36:38.766393 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:36:38.766549 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:36:38.769513 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:36:38.771099 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:38.774083 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:36:38.774459 2903914 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:38.774507 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:36:38.774615 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.780134 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0923 11:36:38.788013 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:36:38.790672 2903914 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:36:38.794853 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:36:38.794879 2903914 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:36:38.794943 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.799096 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:36:38.799171 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:36:38.799267 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.815113 2903914 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0923 11:36:38.821525 2903914 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:38.821606 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471865 bytes)
	I0923 11:36:38.822703 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.827569 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850382 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.850811 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.867021 2903914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:38.867042 2903914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:36:38.867113 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.883391 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.898738 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.922806 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.939666 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.947337 2903914 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:36:38.950063 2903914 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:36:38.952600 2903914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:38.952625 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:36:38.952695 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:38.965723 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.981816 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.987132 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:38.991897 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	W0923 11:36:39.007637 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007696 2903914 retry.go:31] will retry after 239.86918ms: ssh: handshake failed: EOF
	W0923 11:36:39.007745 2903914 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 11:36:39.007761 2903914 retry.go:31] will retry after 150.66552ms: ssh: handshake failed: EOF
	I0923 11:36:39.023482 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.023945 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:39.169972 2903914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:36:39.170259 2903914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
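The sed pipeline above edits CoreDNS's Corefile inside the coredns ConfigMap: it inserts a hosts block ahead of the forward plugin so pods can resolve host.minikube.internal to the gateway, and adds the log plugin ahead of errors. Reconstructed from those two sed expressions, the relevant part of the replaced Corefile should look roughly like this (the untouched plugins are elided):

    .:53 {
        log                # injected before the existing errors line
        errors
        ...                # kubernetes, prometheus, cache, etc. unchanged
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }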
	I0923 11:36:39.494971 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:36:39.556492 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:36:39.608428 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:36:39.623138 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:36:39.623206 2903914 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:36:39.709728 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:36:39.709758 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:36:39.777118 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:36:39.827433 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:36:39.827460 2903914 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:36:39.863119 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:36:39.870165 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:36:39.870234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:36:39.883044 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:36:39.890345 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:36:39.896655 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:36:39.896731 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:36:39.967387 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:36:39.967475 2903914 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:36:40.018137 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:36:40.018231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:36:40.030206 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:36:40.030287 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:36:40.050639 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:36:40.058613 2903914 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.058708 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:36:40.150893 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:36:40.151005 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:36:40.242616 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:36:40.242710 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:36:40.287048 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:36:40.287084 2903914 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:36:40.486475 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:36:40.486504 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:36:40.493831 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:36:40.494920 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:36:40.494943 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:36:40.557986 2903914 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.558018 2903914 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:36:40.619083 2903914 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:40.619108 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:36:40.657629 2903914 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:36:40.657660 2903914 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:36:40.734595 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:36:40.734624 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:36:40.801552 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:36:40.801595 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:36:40.865890 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:36:40.930189 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:36:40.930231 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:36:40.951956 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:36:40.951998 2903914 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:36:40.961846 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:36:40.961876 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:36:40.972100 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:36:41.216224 2903914 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.216250 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:36:41.260577 2903914 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:36:41.260621 2903914 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:36:41.278445 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:41.294750 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:36:41.294791 2903914 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:36:41.323456 2903914 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.153148456s)
	I0923 11:36:41.323525 2903914 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 11:36:41.323500 2903914 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.153503843s)
	I0923 11:36:41.324480 2903914 node_ready.go:35] waiting up to 6m0s for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330122 2903914 node_ready.go:49] node "addons-348379" has status "Ready":"True"
	I0923 11:36:41.330152 2903914 node_ready.go:38] duration metric: took 5.643428ms for node "addons-348379" to be "Ready" ...
	I0923 11:36:41.330163 2903914 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:36:41.339728 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:41.543987 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:36:41.544058 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:36:41.648713 2903914 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.648741 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:36:41.746379 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:36:41.746407 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:36:41.828842 2903914 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-348379" context rescaled to 1 replicas
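kubeadm deploys coredns with two replicas; minikube rescales the Deployment to one, which is why the coredns-7c65d6cfc9-g6mtd pod being waited on above disappears a moment later ("not found" below). The manual equivalent of that rescale:

    kubectl -n kube-system scale deployment coredns --replicas=1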
	I0923 11:36:41.902914 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:36:41.921919 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.42690557s)
	I0923 11:36:41.921979 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.365461937s)
	I0923 11:36:41.922006 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.313555468s)
	I0923 11:36:42.029240 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:36:42.029272 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:36:42.347806 2903914 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347883 2903914 pod_ready.go:82] duration metric: took 1.008114432s for pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace to be "Ready" ...
	E0923 11:36:42.347920 2903914 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g6mtd" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g6mtd" not found
	I0923 11:36:42.347968 2903914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:42.549396 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:36:42.549431 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:36:42.830524 2903914 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:42.830560 2903914 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:36:43.211480 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:36:44.381316 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:44.884644 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.107485626s)
	I0923 11:36:45.843349 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:36:45.843447 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:45.872465 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.470866 2903914 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:36:46.702197 2903914 addons.go:234] Setting addon gcp-auth=true in "addons-348379"
	I0923 11:36:46.702270 2903914 host.go:66] Checking if "addons-348379" exists ...
	I0923 11:36:46.702828 2903914 cli_runner.go:164] Run: docker container inspect addons-348379 --format={{.State.Status}}
	I0923 11:36:46.733217 2903914 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:36:46.733280 2903914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348379
	I0923 11:36:46.775337 2903914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41792 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/addons-348379/id_rsa Username:docker}
	I0923 11:36:46.854788 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:47.501824 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.638626321s)
	I0923 11:36:47.501900 2903914 addons.go:475] Verifying addon ingress=true in "addons-348379"
	I0923 11:36:47.504030 2903914 out.go:177] * Verifying ingress addon...
	I0923 11:36:47.506684 2903914 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:36:47.511450 2903914 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:36:47.511576 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.013380 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.544278 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:48.902915 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:49.026130 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.515743 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:49.786183 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.903059538s)
	I0923 11:36:49.786253 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.89583004s)
	I0923 11:36:49.786324 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.735598298s)
	I0923 11:36:49.786361 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.292505813s)
	I0923 11:36:49.786376 2903914 addons.go:475] Verifying addon registry=true in "addons-348379"
	I0923 11:36:49.786562 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.920643746s)
	I0923 11:36:49.786579 2903914 addons.go:475] Verifying addon metrics-server=true in "addons-348379"
	I0923 11:36:49.786620 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.814490533s)
	I0923 11:36:49.786930 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.508442064s)
	W0923 11:36:49.786964 2903914 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:36:49.786984 2903914 retry.go:31] will retry after 231.122068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
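This is the usual CRD-versus-CR race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl apply that creates its CRD, before the API server has established the new type. minikube simply retries (the retry below switches to apply --force); a more surgical sequence would wait for the CRD first, sketched here with the paths from the log:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml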
	I0923 11:36:49.787069 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.884124931s)
	I0923 11:36:49.788510 2903914 out.go:177] * Verifying registry addon...
	I0923 11:36:49.789825 2903914 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-348379 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:36:49.793761 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:36:49.857501 2903914 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:36:49.857529 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.019003 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:36:50.091953 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.88041396s)
	I0923 11:36:50.091996 2903914 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-348379"
	I0923 11:36:50.092188 2903914 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.35893014s)
	I0923 11:36:50.094620 2903914 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:36:50.094707 2903914 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:36:50.096428 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.099435 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:36:50.101290 2903914 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:36:50.102965 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:36:50.103035 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:36:50.192143 2903914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:36:50.192217 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.253203 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:36:50.253267 2903914 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:36:50.330971 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:50.356163 2903914 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.356234 2903914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:36:50.430758 2903914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:36:50.512255 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:50.604722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:50.797707 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.023077 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.104450 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.297470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.356586 2903914 pod_ready.go:103] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:51.511733 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:51.617197 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:51.799992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:51.872269 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.853212545s)
	I0923 11:36:51.872449 2903914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441621639s)
	I0923 11:36:51.875653 2903914 addons.go:475] Verifying addon gcp-auth=true in "addons-348379"
	I0923 11:36:51.880701 2903914 out.go:177] * Verifying gcp-auth addon...
	I0923 11:36:51.886238 2903914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:36:51.899994 2903914 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:36:52.012221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.113517 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.299559 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.512212 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:52.605423 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:52.799422 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:52.854916 2903914 pod_ready.go:93] pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.854943 2903914 pod_ready.go:82] duration metric: took 10.506940522s for pod "coredns-7c65d6cfc9-ppz9h" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.854956 2903914 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861027 2903914 pod_ready.go:93] pod "etcd-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.861059 2903914 pod_ready.go:82] duration metric: took 6.063045ms for pod "etcd-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.861112 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867900 2903914 pod_ready.go:93] pod "kube-apiserver-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.867934 2903914 pod_ready.go:82] duration metric: took 6.806328ms for pod "kube-apiserver-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.867947 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875428 2903914 pod_ready.go:93] pod "kube-controller-manager-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.875465 2903914 pod_ready.go:82] duration metric: took 7.477644ms for pod "kube-controller-manager-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.875477 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881487 2903914 pod_ready.go:93] pod "kube-proxy-nqbmm" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:52.881527 2903914 pod_ready.go:82] duration metric: took 6.024203ms for pod "kube-proxy-nqbmm" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:52.881558 2903914 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.013082 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.115359 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.261367 2903914 pod_ready.go:93] pod "kube-scheduler-addons-348379" in "kube-system" namespace has status "Ready":"True"
	I0923 11:36:53.261440 2903914 pod_ready.go:82] duration metric: took 379.865643ms for pod "kube-scheduler-addons-348379" in "kube-system" namespace to be "Ready" ...
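Each pod_ready check above reads the pod's Ready condition rather than its phase; a minimal jsonpath sketch of the same test, using the scheduler pod name from the log:

    # Prints "True" once the kubelet reports the pod Ready.
    kubectl --context addons-348379 -n kube-system get pod kube-scheduler-addons-348379 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'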
	I0923 11:36:53.261468 2903914 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:36:53.298690 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:53.511926 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:53.605400 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:53.806273 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.013143 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.105281 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.297943 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:54.511688 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:54.604819 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:54.801146 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.012597 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.104977 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.267783 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:55.297710 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:55.513110 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:55.604704 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:55.797620 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.013461 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.105679 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.298722 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:56.511968 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:56.607325 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:56.804470 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.011592 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.104763 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.298808 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:57.511851 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:57.612444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:57.768189 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:57.797413 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.012279 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.104746 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.298054 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:58.511885 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:58.604675 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:58.801111 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.014365 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.115071 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.299589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:36:59.511922 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:36:59.604297 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:36:59.768227 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:36:59.798277 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.038178 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.105635 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.333590 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:00.512357 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:00.604689 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:00.799851 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.011394 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.105032 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.310842 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:01.511596 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:01.605046 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:01.768750 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:01.798428 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.013963 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.113737 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.297934 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:02.511024 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:02.605393 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:02.798053 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.012529 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.104923 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.298010 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:03.512557 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:03.604282 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:03.768828 2903914 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"False"
	I0923 11:37:03.798455 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.015940 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.104743 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.299059 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:04.512309 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:04.606024 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:04.798339 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.019138 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.104829 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.298375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.511973 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:05.604665 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:05.767100 2903914 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace has status "Ready":"True"
	I0923 11:37:05.767130 2903914 pod_ready.go:82] duration metric: took 12.505640545s for pod "nvidia-device-plugin-daemonset-xqqn9" in "kube-system" namespace to be "Ready" ...
	I0923 11:37:05.767142 2903914 pod_ready.go:39] duration metric: took 24.436967089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:37:05.767156 2903914 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:37:05.767223 2903914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:37:05.781985 2903914 api_server.go:72] duration metric: took 27.445202984s to wait for apiserver process to appear ...
	I0923 11:37:05.782061 2903914 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:37:05.782092 2903914 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 11:37:05.789746 2903914 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
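The healthz probe above can be reproduced through the kubeconfig credentials with kubectl's raw API access, which prints the same "ok" body on a healthy control plane:

    kubectl --context addons-348379 get --raw /healthz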
	I0923 11:37:05.790754 2903914 api_server.go:141] control plane version: v1.31.1
	I0923 11:37:05.790781 2903914 api_server.go:131] duration metric: took 8.705461ms to wait for apiserver health ...
	I0923 11:37:05.790793 2903914 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:37:05.800773 2903914 system_pods.go:59] 18 kube-system pods found
	I0923 11:37:05.800813 2903914 system_pods.go:61] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.800824 2903914 system_pods.go:61] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.800829 2903914 system_pods.go:61] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.800839 2903914 system_pods.go:61] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.800844 2903914 system_pods.go:61] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.800848 2903914 system_pods.go:61] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.800852 2903914 system_pods.go:61] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.800856 2903914 system_pods.go:61] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.800860 2903914 system_pods.go:61] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.800866 2903914 system_pods.go:61] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.800870 2903914 system_pods.go:61] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.800875 2903914 system_pods.go:61] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.800884 2903914 system_pods.go:61] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.800892 2903914 system_pods.go:61] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.800908 2903914 system_pods.go:61] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.800916 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800924 2903914 system_pods.go:61] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.800931 2903914 system_pods.go:61] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.800938 2903914 system_pods.go:74] duration metric: took 10.139277ms to wait for pod list to return data ...
	I0923 11:37:05.800948 2903914 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:37:05.802797 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:05.803949 2903914 default_sa.go:45] found service account: "default"
	I0923 11:37:05.803977 2903914 default_sa.go:55] duration metric: took 3.018472ms for default service account to be created ...
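The default_sa check is a plain existence test; by hand (sketch):

    # The "default" ServiceAccount is created by the controller-manager shortly after startup.
    kubectl --context addons-348379 -n default get serviceaccount default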
	I0923 11:37:05.803986 2903914 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:37:05.813725 2903914 system_pods.go:86] 18 kube-system pods found
	I0923 11:37:05.813761 2903914 system_pods.go:89] "coredns-7c65d6cfc9-ppz9h" [df6d7368-6c3b-4b25-8a3f-d869da9706ef] Running
	I0923 11:37:05.813771 2903914 system_pods.go:89] "csi-hostpath-attacher-0" [6b1ea063-3b87-466f-a2fd-3dd5701e0462] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:37:05.813777 2903914 system_pods.go:89] "csi-hostpath-resizer-0" [03a7bf37-3f70-4482-9e07-8da7e44d10f8] Running
	I0923 11:37:05.813785 2903914 system_pods.go:89] "csi-hostpathplugin-zdwf8" [0e01d919-bfa0-4762-80e4-151ab70fcb25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:37:05.813789 2903914 system_pods.go:89] "etcd-addons-348379" [a0b7a23e-a57c-46e6-b1ee-1fd9b9b61f39] Running
	I0923 11:37:05.813793 2903914 system_pods.go:89] "kindnet-4kcdh" [9c1486e9-05ee-4dd1-827f-25928ce8bfab] Running
	I0923 11:37:05.813798 2903914 system_pods.go:89] "kube-apiserver-addons-348379" [cf08d3aa-855d-4e43-9278-18058aa83802] Running
	I0923 11:37:05.813810 2903914 system_pods.go:89] "kube-controller-manager-addons-348379" [2c507394-9458-4700-b37f-ab54a3e3ffd2] Running
	I0923 11:37:05.813815 2903914 system_pods.go:89] "kube-ingress-dns-minikube" [5189817c-c6f5-4bcd-9fd7-9867cc0b7a40] Running
	I0923 11:37:05.813824 2903914 system_pods.go:89] "kube-proxy-nqbmm" [2feda6b0-b4da-478a-b557-8a5f1559e17c] Running
	I0923 11:37:05.813828 2903914 system_pods.go:89] "kube-scheduler-addons-348379" [941de3db-cdcd-4466-aecb-03dda1815396] Running
	I0923 11:37:05.813835 2903914 system_pods.go:89] "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:37:05.813845 2903914 system_pods.go:89] "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
	I0923 11:37:05.813851 2903914 system_pods.go:89] "registry-66c9cd494c-fhm8g" [e86ab41a-1d3c-4fd0-8e39-126f3b789212] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:37:05.813859 2903914 system_pods.go:89] "registry-proxy-7qmf5" [172e9514-645d-4b65-8403-0862836b34c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:37:05.813866 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-d47ng" [7b9ecb53-7d47-4524-b0ec-66629a7adf6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813877 2903914 system_pods.go:89] "snapshot-controller-56fcc65765-dchr7" [31ede34d-4ccd-4d01-993c-af062382b536] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:37:05.813881 2903914 system_pods.go:89] "storage-provisioner" [19a8b37d-4eee-4889-ab52-103cca27383e] Running
	I0923 11:37:05.813889 2903914 system_pods.go:126] duration metric: took 9.896579ms to wait for k8s-apps to be running ...
	I0923 11:37:05.813904 2903914 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:37:05.813964 2903914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:37:05.826195 2903914 system_svc.go:56] duration metric: took 12.281298ms WaitForService to wait for kubelet
	I0923 11:37:05.826224 2903914 kubeadm.go:582] duration metric: took 27.489446441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:37:05.826246 2903914 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:37:05.829405 2903914 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 11:37:05.829438 2903914 node_conditions.go:123] node cpu capacity is 2
	I0923 11:37:05.829451 2903914 node_conditions.go:105] duration metric: took 3.199247ms to run NodePressure ...
	I0923 11:37:05.829481 2903914 start.go:241] waiting for startup goroutines ...
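The NodePressure figures above (203034800Ki ephemeral storage, 2 CPUs) come straight from the node's reported capacity; they can be read directly with jsonpath:

    kubectl --context addons-348379 get node addons-348379 \
      -o jsonpath='{.status.capacity.cpu}{" cpu, "}{.status.capacity.ephemeral-storage}{" ephemeral-storage"}{"\n"}'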
	I0923 11:37:06.016364 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.105089 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.297935 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:06.512579 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:06.605014 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:06.797841 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.011801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.104294 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.298088 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:07.511664 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:07.604893 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:07.801940 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.013764 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.105345 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.299766 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:08.512208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:08.605989 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:08.797920 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.013457 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.113285 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.298479 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:09.519639 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:09.621187 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:09.798092 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.023070 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.120870 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.299336 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:10.511998 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:10.604502 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:10.798656 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.011605 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.104357 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.297872 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:11.510891 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:11.605182 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:11.798112 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.016742 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.106392 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.302890 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:12.514134 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:12.606249 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:12.802344 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.012829 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.106630 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.297952 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:13.520400 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:13.609375 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:13.799034 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.014344 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.118058 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.298788 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:14.510806 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:14.631742 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:14.797720 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.019826 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.106226 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.298830 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:15.511804 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:15.605411 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:15.798246 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.012028 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.104868 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.297740 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:16.524665 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:16.625850 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:16.797603 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.011338 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.103818 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.298307 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:17.512577 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:17.605193 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:17.798142 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.013543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.105035 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.297589 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:18.512084 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:18.605341 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:18.798244 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.012543 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.113664 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.298444 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:19.511599 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:19.606804 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:19.798859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.017663 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.106755 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.297564 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:20.512160 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:20.604958 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:20.797812 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.013732 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.105616 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.298921 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:21.510795 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:21.604820 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:21.797800 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.012174 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.114441 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.298237 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:37:22.513158 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:22.625275 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:22.799161 2903914 kapi.go:107] duration metric: took 33.005400732s to wait for kubernetes.io/minikube-addons=registry ...
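With the registry pods Ready, a quick smoke test of the addon is possible; the service name and port below are the addon defaults from the minikube registry docs, assumed rather than read from this log:

    kubectl --context addons-348379 -n kube-system port-forward service/registry 5000:80 &
    curl http://localhost:5000/v2/   # an empty JSON object {} means the registry answers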
	I0923 11:37:23.013096 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.105126 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:23.511910 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:23.613223 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.014075 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.105028 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:24.511000 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:24.605248 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.014495 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.111460 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:25.512685 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:25.612274 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.013492 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.105721 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:26.511127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:26.605859 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.012240 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.105210 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:27.511957 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:27.604964 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.012182 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.104813 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:28.511895 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:28.605303 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.013053 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.104426 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:29.511346 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:29.603992 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.014157 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.118062 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:30.511690 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:30.604969 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.014750 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.105518 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:31.511111 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:31.605002 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.012362 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.104960 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:32.511542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:32.604045 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.013753 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.104350 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:33.512293 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:33.614483 2903914 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:37:34.016127 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:34.104734 2903914 kapi.go:107] duration metric: took 44.005304527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
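Once csi-hostpath-driver is up, the driver registration and its StorageClass can be checked directly; the names below are the addon defaults, assumed rather than taken from this log:

    kubectl --context addons-348379 get csidriver hostpath.csi.k8s.io
    kubectl --context addons-348379 get storageclass csi-hostpath-sc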
	I0923 11:37:34.510755 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.019903 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:35.510951 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.014603 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:36.510966 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.016710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:37.511532 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.013339 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:38.511853 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.012335 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:39.511569 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.013791 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:40.511129 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.011102 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:41.510793 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.015629 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:42.512029 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.011711 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:43.510801 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.011721 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:44.511165 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.039643 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:45.511176 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.011883 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:46.511761 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.011001 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:47.511048 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.012925 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:48.511632 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.010792 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:49.511167 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.018047 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:50.511473 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.013208 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:51.511710 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.011604 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:52.512230 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.011680 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:53.511429 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.012725 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:54.511977 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.013759 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:55.512155 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.012542 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:56.512221 2903914 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:37:57.011684 2903914 kapi.go:107] duration metric: took 1m9.505004156s to wait for app.kubernetes.io/name=ingress-nginx ...
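The ingress wait resolved against the same label the controller pod carries; the equivalent manual check (the namespace is the addon default):

    kubectl --context addons-348379 -n ingress-nginx get pods \
      -l app.kubernetes.io/name=ingress-nginx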
	I0923 11:42:51.886491 2903914 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=gcp-auth" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0923 11:42:51.886551 2903914 kapi.go:107] duration metric: took 6m0.000318633s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	W0923 11:42:51.886644 2903914 out.go:270] ! Enabling 'gcp-auth' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=gcp-auth pods: context deadline exceeded]
	I0923 11:42:51.888644 2903914 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress
	I0923 11:42:51.890515 2903914 addons.go:510] duration metric: took 6m13.553319939s for enable addons: enabled=[ingress-dns nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress]
	I0923 11:42:51.890575 2903914 start.go:246] waiting for cluster config update ...
	I0923 11:42:51.890600 2903914 start.go:255] writing updated cluster config ...
	I0923 11:42:51.890918 2903914 ssh_runner.go:195] Run: rm -f paused
	I0923 11:42:52.246444 2903914 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:42:52.248672 2903914 out.go:177] * Done! kubectl is now configured to use "addons-348379" cluster and "default" namespace by default
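
Note on the run above: `start` itself completed, but the gcp-auth addon callback timed out after 6m0s waiting for pods labeled kubernetes.io/minikube-addons=gcp-auth, which is why gcp-auth is absent from the enabled-addons list. A sketch of how one might retry and inspect it afterwards, assuming the addons-348379 profile is still running:

    # retry the failed addon on the same profile
    out/minikube-linux-arm64 addons enable gcp-auth -p addons-348379
    # then check whether its pods ever get scheduled
    kubectl --context addons-348379 -n gcp-auth get pods -o wide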
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	052af9fc2b4c1       289a818c8d9c5       20 minutes ago      Running             controller                               0                   47eb6c64c7a7d       ingress-nginx-controller-bc57996ff-6wq45
	967992dcbaf35       ee6d597e62dc8       20 minutes ago      Running             csi-snapshotter                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c27f769c6bce0       642ded511e141       20 minutes ago      Running             csi-provisioner                          0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	0faa259c6b420       922312104da8a       20 minutes ago      Running             liveness-probe                           0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	994b2f0a53275       08f6b2990811a       20 minutes ago      Running             hostpath                                 0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	ccaa6826447e2       9a80d518f102c       20 minutes ago      Running             csi-attacher                             0                   3c219582bf398       csi-hostpath-attacher-0
	63bc3e7f0cc00       420193b27261a       20 minutes ago      Exited              patch                                    0                   6fd7a092c3487       ingress-nginx-admission-patch-bfw56
	c5db857da31e6       4d1e5c3e97420       20 minutes ago      Running             volume-snapshot-controller               0                   311300b7a364c       snapshot-controller-56fcc65765-d47ng
	d1ac50a6261dd       420193b27261a       20 minutes ago      Exited              create                                   0                   895d61d9c16ad       ingress-nginx-admission-create-fwt6v
	ac4d50ae15f93       4d1e5c3e97420       20 minutes ago      Running             volume-snapshot-controller               0                   698a149c7f280       snapshot-controller-56fcc65765-dchr7
	ed38cab12122b       0107d56dbc0be       20 minutes ago      Running             node-driver-registrar                    0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	5e5a8e69b34fd       7ce2150c8929b       21 minutes ago      Running             local-path-provisioner                   0                   9bc0d2214e2a7       local-path-provisioner-86d989889c-h5pl9
	cecddb85ce0f0       487fa743e1e22       21 minutes ago      Running             csi-resizer                              0                   85d2097f2da83       csi-hostpath-resizer-0
	a6cc88c765de7       1461903ec4fe9       21 minutes ago      Running             csi-external-health-monitor-controller   0                   b44dddc3de5c5       csi-hostpathplugin-zdwf8
	c37000ef28652       35508c2f890c4       21 minutes ago      Running             minikube-ingress-dns                     0                   d881770814e2f       kube-ingress-dns-minikube
	497f8c41b274e       2f6c962e7b831       21 minutes ago      Running             coredns                                  0                   d2d48d489a636       coredns-7c65d6cfc9-ppz9h
	b05bd4b18e280       ba04bb24b9575       21 minutes ago      Running             storage-provisioner                      0                   e816c2d6b5461       storage-provisioner
	a7e55167b7b39       6a23fa8fd2b78       21 minutes ago      Running             kindnet-cni                              0                   9389ee1ac67f5       kindnet-4kcdh
	005547c4c4723       24a140c548c07       21 minutes ago      Running             kube-proxy                               0                   a4defdda67173       kube-proxy-nqbmm
	9255b7a6f4a59       7f8aa378bb47d       21 minutes ago      Running             kube-scheduler                           0                   cae78edec7e41       kube-scheduler-addons-348379
	92413a7a8d6f6       279f381cb3736       21 minutes ago      Running             kube-controller-manager                  0                   731c7b79dc027       kube-controller-manager-addons-348379
	20e5f68e09619       d3f53a98c0a9d       21 minutes ago      Running             kube-apiserver                           0                   bdcb66b095a48       kube-apiserver-addons-348379
	8abc59946512a       27e3830e14027       21 minutes ago      Running             etcd                                     0                   a8c7d184a28f5       etcd-addons-348379
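
The table above is CRI-level container state, roughly what `crictl ps -a` reports inside the node. To regenerate it directly (assuming the docker-driver container is still up):

    out/minikube-linux-arm64 ssh -p addons-348379 -- sudo crictl ps -a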
	
	
	==> containerd <==
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.034898756Z" level=info msg="Container to stop \"5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.118093506Z" level=info msg="shim disconnected" id=1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1 namespace=k8s.io
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.118159852Z" level=warning msg="cleaning up after shim disconnected" id=1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1 namespace=k8s.io
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.118171594Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.132298187Z" level=warning msg="cleanup warnings time=\"2024-09-23T11:56:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.135948296Z" level=info msg="TearDown network for sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" successfully"
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.135994073Z" level=info msg="StopPodSandbox for \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" returns successfully"
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.761040840Z" level=info msg="RemoveContainer for \"5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e\""
	Sep 23 11:56:21 addons-348379 containerd[816]: time="2024-09-23T11:56:21.767845151Z" level=info msg="RemoveContainer for \"5abcda5fe6ab9b1d99f6079f30350a972fd1f20004b42d2f23b22287cf86599e\" returns successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.351949634Z" level=info msg="StopPodSandbox for \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\""
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.360094173Z" level=info msg="TearDown network for sandbox \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\" successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.360136265Z" level=info msg="StopPodSandbox for \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\" returns successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.360784033Z" level=info msg="RemovePodSandbox for \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\""
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.360834880Z" level=info msg="Forcibly stopping sandbox \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\""
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.368383491Z" level=info msg="TearDown network for sandbox \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\" successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.374744752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.374916928Z" level=info msg="RemovePodSandbox \"ffa74d42d59a0302a60d34168e145fe6954b9815080fcda8ceb4d8316aebf291\" returns successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.377196261Z" level=info msg="StopPodSandbox for \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\""
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.377408363Z" level=info msg="TearDown network for sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.377439575Z" level=info msg="StopPodSandbox for \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" returns successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.378151703Z" level=info msg="RemovePodSandbox for \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\""
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.378206440Z" level=info msg="Forcibly stopping sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\""
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.378328474Z" level=info msg="TearDown network for sandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" successfully"
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.390230831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 23 11:56:33 addons-348379 containerd[816]: time="2024-09-23T11:56:33.390505956Z" level=info msg="RemovePodSandbox \"1161771f9554096e1657b068b6b2a085b1dadd98cac8c784b7f53d0044ff24d1\" returns successfully"
	
	
	==> coredns [497f8c41b274e14c72d9933f13fac36b6c4acf8def62c9f3205823623e58226d] <==
	[INFO] 10.244.0.8:44197 - 54267 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180151s
	[INFO] 10.244.0.8:41305 - 6638 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001704017s
	[INFO] 10.244.0.8:41305 - 35565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141999s
	[INFO] 10.244.0.8:40738 - 27223 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080918s
	[INFO] 10.244.0.8:40738 - 10324 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129353s
	[INFO] 10.244.0.8:56286 - 60697 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080098s
	[INFO] 10.244.0.8:56286 - 24093 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043076s
	[INFO] 10.244.0.8:43089 - 14336 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004978s
	[INFO] 10.244.0.8:43089 - 63494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035692s
	[INFO] 10.244.0.8:57209 - 23569 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041272s
	[INFO] 10.244.0.8:57209 - 18191 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041304s
	[INFO] 10.244.0.8:58538 - 8344 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004349043s
	[INFO] 10.244.0.8:58538 - 22686 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003876717s
	[INFO] 10.244.0.8:48601 - 29559 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000050166s
	[INFO] 10.244.0.8:48601 - 628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054556s
	[INFO] 10.244.0.8:43467 - 15089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093161s
	[INFO] 10.244.0.8:43467 - 32245 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042002s
	[INFO] 10.244.0.8:54486 - 7532 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066798s
	[INFO] 10.244.0.8:54486 - 50287 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038416s
	[INFO] 10.244.0.8:33247 - 25963 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052963s
	[INFO] 10.244.0.8:33247 - 9581 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000040123s
	[INFO] 10.244.0.8:42814 - 61299 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001881616s
	[INFO] 10.244.0.8:42814 - 6802 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001453827s
	[INFO] 10.244.0.8:59224 - 13301 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046384s
	[INFO] 10.244.0.8:59224 - 30455 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000035372s
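
The NXDOMAIN/NOERROR pairs above are ordinary ndots:5 search-path expansion, not a failure: a client pod (10.244.0.8) looks up registry.kube-system.svc.cluster.local, and the resolver first tries each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the bare name resolves. A one-off check using a fully qualified name with a trailing dot, which skips the expansion (pod name and image tag below are illustrative):

    kubectl --context addons-348379 run dns-check --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local.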
	
	
	==> describe nodes <==
	Name:               addons-348379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-348379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=addons-348379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_36_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-348379
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-348379"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:36:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-348379
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:58:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:53:24 +0000   Mon, 23 Sep 2024 11:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-348379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb92b9be866447b6af2ef85f12013e1b
	  System UUID:                84447a6a-e17a-42ff-ba46-fb82e93bc172
	  Boot ID:                    d8899273-2c3a-49f7-8c9a-66d2209373ba
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6wq45    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-ppz9h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 csi-hostpathplugin-zdwf8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 etcd-addons-348379                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-4kcdh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-addons-348379                250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-addons-348379       200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-nqbmm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-addons-348379                100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 snapshot-controller-56fcc65765-d47ng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 snapshot-controller-56fcc65765-dchr7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  local-path-storage          local-path-provisioner-86d989889c-h5pl9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  volcano-system              volcano-admission-7f54bd7598-s85bg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  volcano-system              volcano-admission-init-f2bhm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  volcano-system              volcano-controllers-5ff7c5d4db-w658s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  volcano-system              volcano-scheduler-79dc4b78bb-2vx88          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x7 over 21m)  kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node addons-348379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node addons-348379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node addons-348379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           21m                node-controller  Node addons-348379 event: Registered Node addons-348379 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [8abc59946512ab5ab4d902de194545051c7e577bd5196212bf85326ca705cd43] <==
	{"level":"info","ts":"2024-09-23T11:36:27.823441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.823470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T11:36:27.825628Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-348379 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:36:27.825777Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827294Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827491Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:36:27.827732Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.827852Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:36:27.831921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.856413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.847318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:36:27.847973Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:36:27.857537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:36:27.856653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:46:28.587946Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1709}
	{"level":"info","ts":"2024-09-23T11:46:28.663638Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1709,"took":"75.159252ms","hash":39109330,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4308992,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-23T11:46:28.663685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":39109330,"revision":1709,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T11:51:28.595181Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2154}
	{"level":"info","ts":"2024-09-23T11:51:28.614611Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2154,"took":"18.702318ms","hash":1848337147,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":3706880,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-09-23T11:51:28.614664Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1848337147,"revision":2154,"compact-revision":1709}
	{"level":"info","ts":"2024-09-23T11:56:28.605480Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2714}
	{"level":"info","ts":"2024-09-23T11:56:28.626031Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2714,"took":"19.936209ms","hash":17698238,"current-db-size-bytes":8142848,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":3883008,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-23T11:56:28.626090Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":17698238,"revision":2714,"compact-revision":2154}
	
	
	==> kernel <==
	 11:58:13 up 1 day, 19:40,  0 users,  load average: 0.07, 0.22, 0.92
	Linux addons-348379 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a7e55167b7b39430011e4998f44d30eaec7edc15414dc232724161a09a27e599] <==
	I0923 11:56:10.318999       1 main.go:299] handling current node
	I0923 11:56:20.312288       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:56:20.312394       1 main.go:299] handling current node
	I0923 11:56:30.312138       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:56:30.312173       1 main.go:299] handling current node
	I0923 11:56:40.312631       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:56:40.312664       1 main.go:299] handling current node
	I0923 11:56:50.312963       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:56:50.313080       1 main.go:299] handling current node
	I0923 11:57:00.321594       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:57:00.321635       1 main.go:299] handling current node
	I0923 11:57:10.312612       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:57:10.312649       1 main.go:299] handling current node
	I0923 11:57:20.312857       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:57:20.312893       1 main.go:299] handling current node
	I0923 11:57:30.313618       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:57:30.313659       1 main.go:299] handling current node
	I0923 11:57:40.312458       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:57:40.312611       1 main.go:299] handling current node
	I0923 11:57:50.312339       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:57:50.312482       1 main.go:299] handling current node
	I0923 11:58:00.320974       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:58:00.321220       1 main.go:299] handling current node
	I0923 11:58:10.312729       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 11:58:10.312766       1 main.go:299] handling current node
	
	
	==> kube-apiserver [20e5f68e09619b9d622e831b0c429aae0e245dfb5c647d9e6fd9193c6cdfedac] <==
	E0923 11:55:54.870980       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:55:54.872656       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:55:54.974780       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:55:54.974823       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:55:54.976569       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	I0923 11:56:15.642001       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0923 11:56:20.984192       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 11:56:22.027789       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 11:56:26.604931       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	W0923 11:56:26.848394       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:56:26.848435       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:56:26.850299       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	I0923 11:56:26.915749       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.214.149"}
	W0923 11:56:54.880995       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:56:54.881036       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:56:54.882621       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:56:54.982579       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:56:54.982621       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:56:54.984350       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:57:54.890739       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:57:54.890787       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:57:54.892489       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
	W0923 11:57:54.991115       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.216.18:443: connect: connection refused
	E0923 11:57:54.991159       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.216.18:443: connect: connection refused" logger="UnhandledError"
	W0923 11:57:54.992912       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused
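
Two admission webhooks fail differently in the block above: gcp-auth-mutate.k8s.io "fails open" (requests proceed when its endpoint is unreachable, failurePolicy Ignore), while mutatepod.volcano.sh "fails closed" (pod writes are rejected, failurePolicy Fail), which is why pod creation keeps erroring while the volcano admission service is down. The policies can be read off the webhook configurations; a sketch (column names are arbitrary):

    kubectl --context addons-348379 get mutatingwebhookconfigurations \
      -o custom-columns=NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy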
	
	
	==> kube-controller-manager [92413a7a8d6f6d058e961a0759535062b02d7d50e5012e81614e6290ce3465b3] <==
	W0923 11:56:23.353996       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:56:23.354139       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:56:25.688828       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:56:25.688874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 11:56:31.171090       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0923 11:56:31.237210       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:56:31.237256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 11:56:38.147895       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0923 11:56:38.147931       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:56:38.464608       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 11:56:38.464666       1 shared_informer.go:320] Caches are synced for garbage collector
	W0923 11:56:41.691096       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:56:41.691148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 11:56:54.883218       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:56:54.884453       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:56:54.985801       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:56:54.987156       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	W0923 11:56:58.364048       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:56:58.364091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:57:37.043563       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:57:37.043616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 11:57:54.893303       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:57:54.896371       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:57:54.993691       1 job_controller.go:1709] "Unhandled Error" err="Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	E0923 11:57:54.995037       1 job_controller.go:601] "Unhandled Error" err="syncing job: Internal error occurred: failed calling webhook \"mutatepod.volcano.sh\": failed to call webhook: Post \"https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s\": dial tcp 10.105.15.104:443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-proxy [005547c4c4723cd6ce4dac939ebab2b2d89e428b99ec971a179497842dcb5abe] <==
	I0923 11:36:39.711887       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:36:39.808570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 11:36:39.808647       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:36:39.869255       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:36:39.869320       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:36:39.872284       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:36:39.872851       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:36:39.872865       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:36:39.885041       1 config.go:199] "Starting service config controller"
	I0923 11:36:39.885075       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:36:39.885100       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:36:39.885105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:36:39.886870       1 config.go:328] "Starting node config controller"
	I0923 11:36:39.886882       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:36:39.986150       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:36:39.986235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:36:39.987256       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9255b7a6f4a590493be04a5bbbdf14b0efddfe1f321b8a25d2eed1055c6741df] <==
	W0923 11:36:31.587742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:36:31.587843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:36:31.587943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:36:31.588052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:36:31.588221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.588380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.586991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:36:31.588540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:36:31.588655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:36:31.588819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:36:31.588921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:36:31.589185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:36:31.587261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:36:31.589294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 11:36:32.574074       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:56:40 addons-348379 kubelet[1462]: E0923 11:56:40.119565    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:56:42 addons-348379 kubelet[1462]: E0923 11:56:42.120582    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:56:52 addons-348379 kubelet[1462]: E0923 11:56:52.116748    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:56:53 addons-348379 kubelet[1462]: E0923 11:56:53.117434    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:56:56 addons-348379 kubelet[1462]: E0923 11:56:56.117811    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:56:58 addons-348379 kubelet[1462]: E0923 11:56:58.116355    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[admission-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="volcano-system/volcano-admission-7f54bd7598-s85bg" podUID="0b71580f-19db-4c01-80d4-e477def83079"
	Sep 23 11:57:03 addons-348379 kubelet[1462]: E0923 11:57:03.117941    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:57:04 addons-348379 kubelet[1462]: E0923 11:57:04.117310    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:57:11 addons-348379 kubelet[1462]: E0923 11:57:11.117432    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:57:14 addons-348379 kubelet[1462]: E0923 11:57:14.859055    1462 secret.go:188] Couldn't get secret volcano-system/volcano-admission-secret: secret "volcano-admission-secret" not found
	Sep 23 11:57:14 addons-348379 kubelet[1462]: E0923 11:57:14.859138    1462 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b71580f-19db-4c01-80d4-e477def83079-admission-certs podName:0b71580f-19db-4c01-80d4-e477def83079 nodeName:}" failed. No retries permitted until 2024-09-23 11:59:16.859121056 +0000 UTC m=+1363.882511315 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "admission-certs" (UniqueName: "kubernetes.io/secret/0b71580f-19db-4c01-80d4-e477def83079-admission-certs") pod "volcano-admission-7f54bd7598-s85bg" (UID: "0b71580f-19db-4c01-80d4-e477def83079") : secret "volcano-admission-secret" not found
	Sep 23 11:57:15 addons-348379 kubelet[1462]: E0923 11:57:15.117797    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:57:18 addons-348379 kubelet[1462]: E0923 11:57:18.117450    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:57:26 addons-348379 kubelet[1462]: E0923 11:57:26.117695    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:57:27 addons-348379 kubelet[1462]: E0923 11:57:27.118003    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:57:30 addons-348379 kubelet[1462]: E0923 11:57:30.123053    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:57:37 addons-348379 kubelet[1462]: E0923 11:57:37.117689    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:57:40 addons-348379 kubelet[1462]: E0923 11:57:40.118013    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:57:41 addons-348379 kubelet[1462]: E0923 11:57:41.118276    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:57:48 addons-348379 kubelet[1462]: E0923 11:57:48.118271    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:57:51 addons-348379 kubelet[1462]: E0923 11:57:51.117021    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:57:56 addons-348379 kubelet[1462]: E0923 11:57:56.117397    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
	Sep 23 11:58:01 addons-348379 kubelet[1462]: E0923 11:58:01.117965    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-controller-manager:v1.10.0@sha256:5084bdd1edee7c7d676fc1713e02051b975f69839be4a8278a59d4a7a59ad8de\\\"\"" pod="volcano-system/volcano-controllers-5ff7c5d4db-w658s" podUID="eb364615-4484-4d2d-80e2-1bf54875b4a1"
	Sep 23 11:58:06 addons-348379 kubelet[1462]: E0923 11:58:06.117493    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0@sha256:b618879e2ff768f69fb94084f0c644d2278d31e4fa17c898b8763fc7e1648882\\\"\"" pod="volcano-system/volcano-scheduler-79dc4b78bb-2vx88" podUID="74b4a12a-ef6c-40d9-a5f6-e73012730d8a"
	Sep 23 11:58:09 addons-348379 kubelet[1462]: E0923 11:58:09.119540    1462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/docker.io/volcanosh/vc-webhook-manager:v1.10.0@sha256:f56fecf20af14dd9ebea12eb6390014b51a44c552742d4e15d25876438c46e1e\\\"\"" pod="volcano-system/volcano-admission-init-f2bhm" podUID="5ae35267-183a-42ba-96bc-03dac14139ac"
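	
	Note on the repeated pull failures above: every backoff references an image of the form docker.io/docker.io/volcanosh/... . The registry prefix is doubled, so the runtime resolves a repository named "docker.io/volcanosh/..." on Docker Hub, which almost certainly does not exist; the pulls can therefore never succeed, and the scheduler, controllers, and admission-init pods stay in ImagePullBackOff for the whole 6m0s wait. A minimal, stdlib-only sketch of detecting and collapsing the doubled prefix (illustrative; normalizeRef is a made-up helper, not minikube or Volcano code):

package main

import (
	"fmt"
	"strings"
)

// normalizeRef collapses a duplicated "docker.io/" registry prefix, e.g.
// "docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0" ->
// "docker.io/volcanosh/vc-scheduler:v1.10.0".
func normalizeRef(ref string) (string, bool) {
	const reg = "docker.io/"
	if strings.HasPrefix(ref, reg+reg) {
		return strings.TrimPrefix(ref, reg), true
	}
	return ref, false
}

func main() {
	fixed, changed := normalizeRef("docker.io/docker.io/volcanosh/vc-scheduler:v1.10.0")
	fmt.Println(fixed, changed)
}

	The volcano-admission pod fails differently: its admission-certs secret never appears, and the kubelet retries the mount with exponential backoff, which is why the 11:57:14 failure schedules the next attempt for 11:59:16 (durationBeforeRetry 2m2s). A sketch of that schedule, assuming the usual 500ms initial delay and doubling factor; only the 2m2s cap is taken from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 500 * time.Millisecond        // assumed initial delay
		maxDelay = 2*time.Minute + 2*time.Second // cap seen in the log above
	)
	d := initial
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: wait %v before retrying the mount\n", attempt, d)
		d *= 2 // assumed doubling factor
		if d > maxDelay {
			d = maxDelay // later retries all wait the full 2m2s
		}
	}
}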
	
	
	==> storage-provisioner [b05bd4b18e2804c1706af6011e03e349f643f23d7f968ca74ffb0f2eaf78047d] <==
	I0923 11:36:43.550112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:36:43.591903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:36:43.592066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:36:43.604224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:36:43.604420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
	I0923 11:36:43.605459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cdf468f-763c-4a54-98e3-d90ea0e2e8e5", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-348379_e926b365-1f0f-4822-899e-75d077991921 became leader
	I0923 11:36:43.707026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-348379_e926b365-1f0f-4822-899e-75d077991921!
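	
	For context on the healthy log above: the provisioner starts its controller only after winning leader election on the kube-system/k8s.io-minikube-hostpath lock, which is the standard client-go pattern. A sketch of that pattern (illustrative, not the storage-provisioner source; it uses the newer Lease lock where this log shows an Endpoints-based one):

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Same namespace/name as the lock in the log above.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
				// the controller's Run loop would be started here
				<-ctx.Done()
			},
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}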
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-348379 -n addons-348379
helpers_test.go:261: (dbg) Run:  kubectl --context addons-348379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1 (92.771399ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fwt6v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bfw56" not found
	Error from server (NotFound): pods "volcano-admission-7f54bd7598-s85bg" not found
	Error from server (NotFound): pods "volcano-admission-init-f2bhm" not found
	Error from server (NotFound): pods "volcano-controllers-5ff7c5d4db-w658s" not found
	Error from server (NotFound): pods "volcano-scheduler-79dc4b78bb-2vx88" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-348379 describe pod ingress-nginx-admission-create-fwt6v ingress-nginx-admission-patch-bfw56 volcano-admission-7f54bd7598-s85bg volcano-admission-init-f2bhm volcano-controllers-5ff7c5d4db-w658s volcano-scheduler-79dc4b78bb-2vx88: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (483.79s)

TestAddons/parallel/LocalPath (0.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-348379 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-348379 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:926: (dbg) Non-zero exit: kubectl --context addons-348379 apply -f testdata/storage-provisioner-rancher/pod.yaml: exit status 1 (205.186012ms)

** stderr ** 
	Error from server (InternalError): error when creating "testdata/storage-provisioner-rancher/pod.yaml": Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.15.104:443: connect: connection refused

** /stderr **
addons_test.go:928: kubectl apply pod.yaml failed: args "kubectl --context addons-348379 apply -f testdata/storage-provisioner-rancher/pod.yaml": exit status 1
--- FAIL: TestAddons/parallel/LocalPath (0.46s)
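
This failure is immediate rather than a timeout: the Volcano mutating webhook (mutatepod.volcano.sh) is still registered with the apiserver, but its backing volcano-admission pod never started (see the Volcano failures above), so the webhook call is refused and the pod create fails. A stdlib-only diagnostic sketch that probes the service the way the apiserver would; the address is the ClusterIP taken from the error text, and this is an illustrative idea, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Run from inside the cluster network (e.g. a debug pod); 10.105.15.104:443
	// is the volcano-admission-service address from the error above.
	conn, err := net.DialTimeout("tcp", "10.105.15.104:443", 3*time.Second)
	if err != nil {
		fmt.Println("webhook unreachable:", err) // expected while volcano-admission is down
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}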

TestStartStop/group/old-k8s-version/serial/SecondStart (379.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-605045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-605045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m15.041367248s)

-- stdout --
	* [old-k8s-version-605045] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-605045" primary control-plane node in "old-k8s-version-605045" cluster
	* Pulling base image v0.0.45-1726784731-19672 ...
	* Restarting existing docker container for "old-k8s-version-605045" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-605045 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0923 12:44:01.087349 3102485 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:44:01.087699 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:44:01.087710 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:44:01.087716 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:44:01.087986 3102485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:44:01.088377 3102485 out.go:352] Setting JSON to false
	I0923 12:44:01.089351 3102485 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":159988,"bootTime":1726935453,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 12:44:01.089415 3102485 start.go:139] virtualization:  
	I0923 12:44:01.091829 3102485 out.go:177] * [old-k8s-version-605045] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 12:44:01.093926 3102485 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 12:44:01.093992 3102485 notify.go:220] Checking for updates...
	I0923 12:44:01.100268 3102485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:44:01.102146 3102485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:44:01.103646 3102485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 12:44:01.105295 3102485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 12:44:01.107150 3102485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:44:01.109359 3102485 config.go:182] Loaded profile config "old-k8s-version-605045": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0923 12:44:01.112454 3102485 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 12:44:01.114237 3102485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:44:01.156878 3102485 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 12:44:01.157019 3102485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:44:01.241163 3102485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-23 12:44:01.229465942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:44:01.241281 3102485 docker.go:318] overlay module found
	I0923 12:44:01.243354 3102485 out.go:177] * Using the docker driver based on existing profile
	I0923 12:44:01.245240 3102485 start.go:297] selected driver: docker
	I0923 12:44:01.245257 3102485 start.go:901] validating driver "docker" against &{Name:old-k8s-version-605045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-605045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:44:01.245384 3102485 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:44:01.246043 3102485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:44:01.318843 3102485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-23 12:44:01.306975603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:44:01.319270 3102485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:44:01.319316 3102485 cni.go:84] Creating CNI manager for ""
	I0923 12:44:01.319372 3102485 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 12:44:01.319431 3102485 start.go:340] cluster config:
	{Name:old-k8s-version-605045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-605045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:44:01.321993 3102485 out.go:177] * Starting "old-k8s-version-605045" primary control-plane node in "old-k8s-version-605045" cluster
	I0923 12:44:01.324405 3102485 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 12:44:01.326397 3102485 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 12:44:01.328348 3102485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 12:44:01.328422 3102485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0923 12:44:01.328435 3102485 cache.go:56] Caching tarball of preloaded images
	I0923 12:44:01.328539 3102485 preload.go:172] Found /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 12:44:01.328554 3102485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0923 12:44:01.328678 3102485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/config.json ...
	I0923 12:44:01.328933 3102485 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 12:44:01.352146 3102485 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 12:44:01.352170 3102485 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 12:44:01.352189 3102485 cache.go:194] Successfully downloaded all kic artifacts
	I0923 12:44:01.352213 3102485 start.go:360] acquireMachinesLock for old-k8s-version-605045: {Name:mk225242fd48ee1df3a44019e0bef95cc5d4a595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:44:01.352280 3102485 start.go:364] duration metric: took 45.112µs to acquireMachinesLock for "old-k8s-version-605045"
	I0923 12:44:01.352307 3102485 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:44:01.352326 3102485 fix.go:54] fixHost starting: 
	I0923 12:44:01.352602 3102485 cli_runner.go:164] Run: docker container inspect old-k8s-version-605045 --format={{.State.Status}}
	I0923 12:44:01.373924 3102485 fix.go:112] recreateIfNeeded on old-k8s-version-605045: state=Stopped err=<nil>
	W0923 12:44:01.373960 3102485 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:44:01.379252 3102485 out.go:177] * Restarting existing docker container for "old-k8s-version-605045" ...
	I0923 12:44:01.381295 3102485 cli_runner.go:164] Run: docker start old-k8s-version-605045
	I0923 12:44:01.754970 3102485 cli_runner.go:164] Run: docker container inspect old-k8s-version-605045 --format={{.State.Status}}
	I0923 12:44:01.784918 3102485 kic.go:430] container "old-k8s-version-605045" state is running.
	I0923 12:44:01.785321 3102485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-605045
	I0923 12:44:01.811580 3102485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/config.json ...
	I0923 12:44:01.811816 3102485 machine.go:93] provisionDockerMachine start ...
	I0923 12:44:01.811880 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:01.839513 3102485 main.go:141] libmachine: Using SSH client type: native
	I0923 12:44:01.839777 3102485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 42087 <nil> <nil>}
	I0923 12:44:01.839788 3102485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:44:01.840631 3102485 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33906->127.0.0.1:42087: read: connection reset by peer
	I0923 12:44:04.998832 3102485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-605045
	
	I0923 12:44:04.998895 3102485 ubuntu.go:169] provisioning hostname "old-k8s-version-605045"
	I0923 12:44:04.998998 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:05.044787 3102485 main.go:141] libmachine: Using SSH client type: native
	I0923 12:44:05.045045 3102485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 42087 <nil> <nil>}
	I0923 12:44:05.045057 3102485 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-605045 && echo "old-k8s-version-605045" | sudo tee /etc/hostname
	I0923 12:44:05.226868 3102485 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-605045
	
	I0923 12:44:05.227009 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:05.269026 3102485 main.go:141] libmachine: Using SSH client type: native
	I0923 12:44:05.269363 3102485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 42087 <nil> <nil>}
	I0923 12:44:05.269391 3102485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-605045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-605045/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-605045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:44:05.432136 3102485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:44:05.432209 3102485 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
	I0923 12:44:05.432257 3102485 ubuntu.go:177] setting up certificates
	I0923 12:44:05.432298 3102485 provision.go:84] configureAuth start
	I0923 12:44:05.432382 3102485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-605045
	I0923 12:44:05.464799 3102485 provision.go:143] copyHostCerts
	I0923 12:44:05.464877 3102485 exec_runner.go:144] found /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem, removing ...
	I0923 12:44:05.464896 3102485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem
	I0923 12:44:05.464976 3102485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
	I0923 12:44:05.465088 3102485 exec_runner.go:144] found /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem, removing ...
	I0923 12:44:05.465093 3102485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem
	I0923 12:44:05.465122 3102485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
	I0923 12:44:05.465180 3102485 exec_runner.go:144] found /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem, removing ...
	I0923 12:44:05.465190 3102485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem
	I0923 12:44:05.465215 3102485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
	I0923 12:44:05.465269 3102485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-605045 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-605045]
	I0923 12:44:05.789801 3102485 provision.go:177] copyRemoteCerts
	I0923 12:44:05.789883 3102485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:44:05.789929 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:05.809183 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:05.913745 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:44:05.955970 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0923 12:44:05.984964 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:44:06.032578 3102485 provision.go:87] duration metric: took 600.236628ms to configureAuth
	I0923 12:44:06.032611 3102485 ubuntu.go:193] setting minikube options for container-runtime
	I0923 12:44:06.032992 3102485 config.go:182] Loaded profile config "old-k8s-version-605045": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0923 12:44:06.033028 3102485 machine.go:96] duration metric: took 4.221201798s to provisionDockerMachine
	I0923 12:44:06.033045 3102485 start.go:293] postStartSetup for "old-k8s-version-605045" (driver="docker")
	I0923 12:44:06.033060 3102485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:44:06.033579 3102485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:44:06.033672 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:06.075547 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:06.184883 3102485 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:44:06.189821 3102485 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 12:44:06.189877 3102485 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 12:44:06.189896 3102485 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 12:44:06.189904 3102485 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 12:44:06.189931 3102485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
	I0923 12:44:06.190021 3102485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
	I0923 12:44:06.190131 3102485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem -> 29031582.pem in /etc/ssl/certs
	I0923 12:44:06.190279 3102485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:44:06.205735 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem --> /etc/ssl/certs/29031582.pem (1708 bytes)
	I0923 12:44:06.247580 3102485 start.go:296] duration metric: took 214.514442ms for postStartSetup
	I0923 12:44:06.247711 3102485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:44:06.247763 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:06.274620 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:06.369740 3102485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 12:44:06.375792 3102485 fix.go:56] duration metric: took 5.023467904s for fixHost
	I0923 12:44:06.375815 3102485 start.go:83] releasing machines lock for "old-k8s-version-605045", held for 5.023521541s
	I0923 12:44:06.375897 3102485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-605045
	I0923 12:44:06.402166 3102485 ssh_runner.go:195] Run: cat /version.json
	I0923 12:44:06.402216 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:06.402463 3102485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:44:06.402513 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:06.439473 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:06.440944 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:06.547188 3102485 ssh_runner.go:195] Run: systemctl --version
	I0923 12:44:06.722140 3102485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:44:06.727305 3102485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 12:44:06.754486 3102485 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 12:44:06.754568 3102485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:44:06.767198 3102485 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 12:44:06.767225 3102485 start.go:495] detecting cgroup driver to use...
	I0923 12:44:06.767259 3102485 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 12:44:06.767379 3102485 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:44:06.790648 3102485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:44:06.805855 3102485 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:44:06.806064 3102485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:44:06.828015 3102485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:44:06.843604 3102485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:44:06.994695 3102485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:44:07.147856 3102485 docker.go:233] disabling docker service ...
	I0923 12:44:07.148010 3102485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:44:07.166247 3102485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:44:07.183926 3102485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:44:07.332055 3102485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:44:07.498946 3102485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:44:07.517892 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:44:07.537379 3102485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0923 12:44:07.550213 3102485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:44:07.562913 3102485 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:44:07.563037 3102485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:44:07.575087 3102485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:44:07.586291 3102485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:44:07.598274 3102485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:44:07.609446 3102485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:44:07.620312 3102485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:44:07.631016 3102485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:44:07.641448 3102485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:44:07.656821 3102485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:44:07.807992 3102485 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:44:08.081398 3102485 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 12:44:08.081495 3102485 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 12:44:08.086003 3102485 start.go:563] Will wait 60s for crictl version
	I0923 12:44:08.086083 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:44:08.090933 3102485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:44:08.141930 3102485 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 12:44:08.142003 3102485 ssh_runner.go:195] Run: containerd --version
	I0923 12:44:08.166053 3102485 ssh_runner.go:195] Run: containerd --version
	I0923 12:44:08.192932 3102485 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0923 12:44:08.195048 3102485 cli_runner.go:164] Run: docker network inspect old-k8s-version-605045 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 12:44:08.221268 3102485 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0923 12:44:08.239556 3102485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:44:08.256709 3102485 kubeadm.go:883] updating cluster {Name:old-k8s-version-605045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-605045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:44:08.256846 3102485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 12:44:08.256909 3102485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:44:08.333304 3102485 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 12:44:08.333382 3102485 containerd.go:534] Images already preloaded, skipping extraction
	I0923 12:44:08.333477 3102485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:44:08.402317 3102485 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 12:44:08.402343 3102485 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:44:08.402352 3102485 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0923 12:44:08.402470 3102485 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-605045 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-605045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:44:08.402565 3102485 ssh_runner.go:195] Run: sudo crictl info
	I0923 12:44:08.483815 3102485 cni.go:84] Creating CNI manager for ""
	I0923 12:44:08.483844 3102485 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 12:44:08.483854 3102485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:44:08.483874 3102485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-605045 NodeName:old-k8s-version-605045 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0923 12:44:08.484003 3102485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-605045"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
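For orientation: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at 12:44:08.483874 and written out as a single "---"-separated file (the kubeadm.yaml.new transferred a few lines below). A minimal Go sketch of emitting one such document, assuming gopkg.in/yaml.v3 and a trimmed, hypothetical field set; minikube itself renders these from its own templates, so this is illustrative only:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency, used here instead of minikube's templates
)

// kubeletConfig mirrors a small, hypothetical subset of
// kubelet.config.k8s.io/v1beta1 for illustration.
type kubeletConfig struct {
	APIVersion    string `yaml:"apiVersion"`
	Kind          string `yaml:"kind"`
	CgroupDriver  string `yaml:"cgroupDriver"`
	ClusterDomain string `yaml:"clusterDomain"`
	FailSwapOn    bool   `yaml:"failSwapOn"`
	StaticPodPath string `yaml:"staticPodPath"`
}

func main() {
	doc := kubeletConfig{
		APIVersion:    "kubelet.config.k8s.io/v1beta1",
		Kind:          "KubeletConfiguration",
		CgroupDriver:  "cgroupfs",
		ClusterDomain: "cluster.local",
		FailSwapOn:    false,
		StaticPodPath: "/etc/kubernetes/manifests",
	}
	out, err := yaml.Marshal(doc)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// "---" separates the documents in the combined kubeadm.yaml.
	fmt.Printf("---\n%s", out)
}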
	I0923 12:44:08.484073 3102485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0923 12:44:08.496746 3102485 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:44:08.496823 3102485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:44:08.510528 3102485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0923 12:44:08.532228 3102485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:44:08.553987 3102485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0923 12:44:08.580681 3102485 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0923 12:44:08.586052 3102485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:44:08.607407 3102485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:44:08.743270 3102485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:44:08.762023 3102485 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045 for IP: 192.168.85.2
	I0923 12:44:08.762047 3102485 certs.go:194] generating shared ca certs ...
	I0923 12:44:08.762064 3102485 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:44:08.762200 3102485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
	I0923 12:44:08.762263 3102485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
	I0923 12:44:08.762277 3102485 certs.go:256] generating profile certs ...
	I0923 12:44:08.762362 3102485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.key
	I0923 12:44:08.762435 3102485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/apiserver.key.d78aadc0
	I0923 12:44:08.762480 3102485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/proxy-client.key
	I0923 12:44:08.762593 3102485 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/2903158.pem (1338 bytes)
	W0923 12:44:08.762626 3102485 certs.go:480] ignoring /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/2903158_empty.pem, impossibly tiny 0 bytes
	I0923 12:44:08.762639 3102485 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:44:08.762663 3102485 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
	I0923 12:44:08.762689 3102485 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:44:08.762715 3102485 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
	I0923 12:44:08.762765 3102485 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem (1708 bytes)
	I0923 12:44:08.763409 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:44:08.822683 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:44:08.889472 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:44:08.938974 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:44:08.998882 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 12:44:09.080621 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:44:09.131867 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:44:09.183633 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 12:44:09.232095 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem --> /usr/share/ca-certificates/29031582.pem (1708 bytes)
	I0923 12:44:09.278813 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:44:09.319821 3102485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/2903158.pem --> /usr/share/ca-certificates/2903158.pem (1338 bytes)
	I0923 12:44:09.360407 3102485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:44:09.389676 3102485 ssh_runner.go:195] Run: openssl version
	I0923 12:44:09.404107 3102485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:44:09.420854 3102485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:44:09.425074 3102485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:44:09.425144 3102485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:44:09.434803 3102485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:44:09.448634 3102485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903158.pem && ln -fs /usr/share/ca-certificates/2903158.pem /etc/ssl/certs/2903158.pem"
	I0923 12:44:09.468461 3102485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903158.pem
	I0923 12:44:09.472753 3102485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:06 /usr/share/ca-certificates/2903158.pem
	I0923 12:44:09.472823 3102485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903158.pem
	I0923 12:44:09.484169 3102485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2903158.pem /etc/ssl/certs/51391683.0"
	I0923 12:44:09.495584 3102485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29031582.pem && ln -fs /usr/share/ca-certificates/29031582.pem /etc/ssl/certs/29031582.pem"
	I0923 12:44:09.510509 3102485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29031582.pem
	I0923 12:44:09.514376 3102485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:06 /usr/share/ca-certificates/29031582.pem
	I0923 12:44:09.514453 3102485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29031582.pem
	I0923 12:44:09.526029 3102485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29031582.pem /etc/ssl/certs/3ec20f2e.0"
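The ls / "openssl x509 -hash" / "ln -fs" triples above implement OpenSSL's hashed-directory CA lookup: each certificate in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0. A Go sketch of one such step, shelling out to the same openssl invocation shown in the log (illustrative, not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert mirrors the log's "openssl x509 -hash" + "ln -fs" pair:
// compute the subject-name hash of pemPath, then symlink it into
// /etc/ssl/certs as <hash>.0 so OpenSSL's directory lookup finds it.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs semantics: drop any stale link, then create the new one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}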
	I0923 12:44:09.538170 3102485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:44:09.543948 3102485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 12:44:09.553748 3102485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 12:44:09.561511 3102485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 12:44:09.575584 3102485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 12:44:09.588054 3102485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 12:44:09.600075 3102485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
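Each "openssl x509 ... -checkend 86400" run above asks whether a control-plane certificate expires within the next 24 hours (86,400 seconds). The equivalent check with Go's crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}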
	I0923 12:44:09.612036 3102485 kubeadm.go:392] StartCluster: {Name:old-k8s-version-605045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-605045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:44:09.612135 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 12:44:09.612197 3102485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:44:09.684946 3102485 cri.go:89] found id: "8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:44:09.684970 3102485 cri.go:89] found id: "7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:44:09.684976 3102485 cri.go:89] found id: "5b397ef35967b921c43f5c7afe37dbd498e745c39e94adb4564df51ed332ea5d"
	I0923 12:44:09.684980 3102485 cri.go:89] found id: "c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:44:09.684984 3102485 cri.go:89] found id: "7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:44:09.684987 3102485 cri.go:89] found id: "1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:44:09.684992 3102485 cri.go:89] found id: "2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:44:09.684996 3102485 cri.go:89] found id: "2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:44:09.684999 3102485 cri.go:89] found id: ""
	I0923 12:44:09.685051 3102485 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0923 12:44:09.697747 3102485 cri.go:116] JSON = null
	W0923 12:44:09.697801 3102485 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
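The warning above comes from cross-checking two listings: crictl reported 8 kube-system containers while "runc list -f json" printed the literal null, so the unpause step is skipped. A sketch of parsing that runc output in Go; the JSON field names here are an assumption about runc's output shape:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// containerState holds the subset of `runc list -f json` output we need;
// the field set is an assumption, not a verified runc schema.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "--root", "/run/containerd/runc/k8s.io",
		"list", "-f", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The log's "JSON = null" case: a literal null unmarshals into an
	// empty (nil) slice here, giving the "0 containers" count.
	var states []containerState
	if err := json.Unmarshal(out, &states); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	paused := 0
	for _, s := range states {
		if s.Status == "paused" {
			paused++
		}
	}
	fmt.Printf("runc reports %d paused of %d containers\n", paused, len(states))
}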
	I0923 12:44:09.697871 3102485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:44:09.715348 3102485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 12:44:09.715369 3102485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 12:44:09.715422 3102485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 12:44:09.730120 3102485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:44:09.730583 3102485 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-605045" does not appear in /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:44:09.730695 3102485 kubeconfig.go:62] /home/jenkins/minikube-integration/19688-2897765/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-605045" cluster setting kubeconfig missing "old-k8s-version-605045" context setting]
	I0923 12:44:09.730959 3102485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:44:09.732282 3102485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 12:44:09.741914 3102485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0923 12:44:09.741949 3102485 kubeadm.go:597] duration metric: took 26.572933ms to restartPrimaryControlPlane
	I0923 12:44:09.741960 3102485 kubeadm.go:394] duration metric: took 129.935628ms to StartCluster
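restartPrimaryControlPlane first repairs the kubeconfig: the verify step found the "old-k8s-version-605045" cluster and context entries missing, so they are added and the file is rewritten under a lock. A rough equivalent using client-go's clientcmd package (a sketch; the server endpoint and file paths are copied from the log, everything else is assumed):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/19688-2897765/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	name := "old-k8s-version-605045"
	// Add the cluster entry the verifier found missing ...
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.85.2:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt"
	cfg.Clusters[name] = cluster
	// ... and the matching context.
	ctx := clientcmdapi.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}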
	I0923 12:44:09.741975 3102485 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:44:09.742040 3102485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:44:09.742651 3102485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:44:09.742853 3102485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 12:44:09.743201 3102485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:44:09.743271 3102485 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-605045"
	I0923 12:44:09.743408 3102485 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-605045"
	W0923 12:44:09.743414 3102485 addons.go:243] addon storage-provisioner should already be in state true
	I0923 12:44:09.743452 3102485 host.go:66] Checking if "old-k8s-version-605045" exists ...
	I0923 12:44:09.743937 3102485 cli_runner.go:164] Run: docker container inspect old-k8s-version-605045 --format={{.State.Status}}
	I0923 12:44:09.744400 3102485 config.go:182] Loaded profile config "old-k8s-version-605045": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0923 12:44:09.744484 3102485 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-605045"
	I0923 12:44:09.744520 3102485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-605045"
	I0923 12:44:09.744824 3102485 cli_runner.go:164] Run: docker container inspect old-k8s-version-605045 --format={{.State.Status}}
	I0923 12:44:09.745346 3102485 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-605045"
	I0923 12:44:09.745371 3102485 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-605045"
	W0923 12:44:09.745380 3102485 addons.go:243] addon metrics-server should already be in state true
	I0923 12:44:09.745405 3102485 host.go:66] Checking if "old-k8s-version-605045" exists ...
	I0923 12:44:09.745831 3102485 cli_runner.go:164] Run: docker container inspect old-k8s-version-605045 --format={{.State.Status}}
	I0923 12:44:09.748403 3102485 addons.go:69] Setting dashboard=true in profile "old-k8s-version-605045"
	I0923 12:44:09.748528 3102485 addons.go:234] Setting addon dashboard=true in "old-k8s-version-605045"
	W0923 12:44:09.748545 3102485 addons.go:243] addon dashboard should already be in state true
	I0923 12:44:09.748576 3102485 host.go:66] Checking if "old-k8s-version-605045" exists ...
	I0923 12:44:09.748758 3102485 out.go:177] * Verifying Kubernetes components...
	I0923 12:44:09.749991 3102485 cli_runner.go:164] Run: docker container inspect old-k8s-version-605045 --format={{.State.Status}}
	I0923 12:44:09.751176 3102485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:44:09.815582 3102485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:44:09.817801 3102485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:44:09.817823 3102485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:44:09.817900 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:09.831468 3102485 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0923 12:44:09.833471 3102485 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0923 12:44:09.834866 3102485 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-605045"
	W0923 12:44:09.834894 3102485 addons.go:243] addon default-storageclass should already be in state true
	I0923 12:44:09.834918 3102485 host.go:66] Checking if "old-k8s-version-605045" exists ...
	I0923 12:44:09.835373 3102485 cli_runner.go:164] Run: docker container inspect old-k8s-version-605045 --format={{.State.Status}}
	I0923 12:44:09.835598 3102485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0923 12:44:09.836404 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0923 12:44:09.836424 3102485 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0923 12:44:09.836501 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:09.837654 3102485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 12:44:09.837684 3102485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 12:44:09.837744 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:09.879677 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:09.916761 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:09.918728 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:09.923592 3102485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:44:09.923617 3102485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:44:09.923681 3102485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-605045
	I0923 12:44:09.954645 3102485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42087 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/old-k8s-version-605045/id_rsa Username:docker}
	I0923 12:44:10.014285 3102485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:44:10.072019 3102485 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-605045" to be "Ready" ...
	I0923 12:44:10.158398 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:44:10.170945 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0923 12:44:10.171019 3102485 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0923 12:44:10.239681 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0923 12:44:10.239755 3102485 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0923 12:44:10.253695 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:44:10.255390 3102485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 12:44:10.255451 3102485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0923 12:44:10.329073 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0923 12:44:10.329226 3102485 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0923 12:44:10.355159 3102485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 12:44:10.355223 3102485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 12:44:10.386702 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0923 12:44:10.386726 3102485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0923 12:44:10.430294 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0923 12:44:10.430321 3102485 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0923 12:44:10.487292 3102485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:44:10.487365 3102485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 12:44:10.544152 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0923 12:44:10.544224 3102485 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0923 12:44:10.560276 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.560310 3102485 retry.go:31] will retry after 191.983356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
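The retry.go "will retry after ..." messages throughout this stretch are minikube's apply-with-backoff loop: kubectl apply fails with connection refused while the apiserver is still restarting, and each failure schedules another attempt after a randomized, growing delay. A minimal sketch of the pattern (not the actual retry.go implementation):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing
// delay between failures, like the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Randomize around an exponentially growing base delay.
		d := base << i
		d += time.Duration(rand.Int63n(int64(d)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	i := 0
	err := retry(5, 100*time.Millisecond, func() error {
		i++
		if i < 3 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}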
	I0923 12:44:10.577535 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:44:10.617852 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0923 12:44:10.617928 3102485 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0923 12:44:10.636171 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.636254 3102485 retry.go:31] will retry after 339.596744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.667813 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0923 12:44:10.667888 3102485 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0923 12:44:10.718979 3102485 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:44:10.719053 3102485 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0923 12:44:10.750414 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:44:10.752414 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 12:44:10.788539 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.788573 3102485 retry.go:31] will retry after 154.8038ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.944405 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 12:44:10.946450 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.946483 3102485 retry.go:31] will retry after 226.876954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:10.946538 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.946551 3102485 retry.go:31] will retry after 497.821315ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:10.976860 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 12:44:11.114586 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.114622 3102485 retry.go:31] will retry after 490.507338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:11.149821 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.149856 3102485 retry.go:31] will retry after 318.133375ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.174142 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0923 12:44:11.286975 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.287072 3102485 retry.go:31] will retry after 192.96777ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.445418 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:44:11.468751 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:44:11.480241 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:44:11.606002 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 12:44:11.692898 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.692975 3102485 retry.go:31] will retry after 589.236623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:11.773393 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.773472 3102485 retry.go:31] will retry after 651.919996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:11.807590 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.807689 3102485 retry.go:31] will retry after 737.003205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:11.814918 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:11.814956 3102485 retry.go:31] will retry after 783.761031ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.072632 3102485 node_ready.go:53] error getting node "old-k8s-version-605045": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-605045": dial tcp 192.168.85.2:8443: connect: connection refused
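node_ready.go meanwhile polls the node object until its Ready condition turns True; while the apiserver is down, every GET fails with connection refused, as above. A sketch of that poll with client-go (illustrative; the kubeconfig path and node name are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-605045", metav1.GetOptions{})
		if err != nil {
			// Early on this is "connect: connection refused", as in the log.
			fmt.Println("error getting node:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}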
	I0923 12:44:12.283049 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 12:44:12.389934 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.389974 3102485 retry.go:31] will retry after 1.165294876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.426189 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 12:44:12.499040 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.499070 3102485 retry.go:31] will retry after 441.708268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.545231 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:44:12.599083 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 12:44:12.632027 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.632058 3102485 retry.go:31] will retry after 761.80126ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:12.703563 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.703597 3102485 retry.go:31] will retry after 576.888728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:12.940986 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 12:44:13.065918 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:13.065966 3102485 retry.go:31] will retry after 1.372537837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:13.280839 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 12:44:13.373138 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:13.373184 3102485 retry.go:31] will retry after 778.918432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:13.394475 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0923 12:44:13.483130 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:13.483174 3102485 retry.go:31] will retry after 941.531266ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:13.556317 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 12:44:13.664130 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:13.664168 3102485 retry.go:31] will retry after 1.530430265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:14.072801 3102485 node_ready.go:53] error getting node "old-k8s-version-605045": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-605045": dial tcp 192.168.85.2:8443: connect: connection refused
	I0923 12:44:14.153155 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 12:44:14.270606 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:14.270640 3102485 retry.go:31] will retry after 1.366500149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:14.425000 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:44:14.439266 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 12:44:14.516901 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:14.516936 3102485 retry.go:31] will retry after 2.020433238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:14.545482 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:14.545532 3102485 retry.go:31] will retry after 2.067837068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:15.195357 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 12:44:15.332174 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:15.332208 3102485 retry.go:31] will retry after 2.534677864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:15.638185 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 12:44:15.729287 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:15.729330 3102485 retry.go:31] will retry after 1.506332209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:16.073279 3102485 node_ready.go:53] error getting node "old-k8s-version-605045": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-605045": dial tcp 192.168.85.2:8443: connect: connection refused
	I0923 12:44:16.538054 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0923 12:44:16.608324 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:16.608360 3102485 retry.go:31] will retry after 1.576341399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:16.613566 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 12:44:16.689599 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:16.689638 3102485 retry.go:31] will retry after 1.665316473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:17.236650 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0923 12:44:17.365271 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:17.365363 3102485 retry.go:31] will retry after 5.101349325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:17.867031 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0923 12:44:17.982248 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:17.982283 3102485 retry.go:31] will retry after 4.028563817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:18.073378 3102485 node_ready.go:53] error getting node "old-k8s-version-605045": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-605045": dial tcp 192.168.85.2:8443: connect: connection refused
	I0923 12:44:18.185820 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:44:18.356036 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0923 12:44:18.626369 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:18.626406 3102485 retry.go:31] will retry after 5.61188231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 12:44:18.930992 3102485 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 12:44:18.931030 3102485 retry.go:31] will retry after 2.624806836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
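
	The retry.go:31 entries above re-queue each failed kubectl apply with a small, growing, jittered delay instead of aborting while the apiserver restarts. A minimal self-contained Go sketch of that pattern follows; the helper name, attempt cap, and jitter factor are illustrative assumptions, not minikube's actual retry API.

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
	    // sleeping a jittered, growing delay between tries, the same shape as
	    // the "will retry after 1.530430265s" lines above.
	    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = fn(); err == nil {
	                return nil
	            }
	            // Grow the delay each round and add up to 50% random jitter so
	            // concurrent appliers do not retry in lockstep.
	            d := base * time.Duration(i+1)
	            d += time.Duration(rand.Int63n(int64(d / 2)))
	            fmt.Printf("will retry after %s: %v\n", d, err)
	            time.Sleep(d)
	        }
	        return err
	    }

	    func main() {
	        calls := 0
	        err := retryWithBackoff(5, time.Second, func() error {
	            calls++
	            if calls < 3 {
	                return errors.New("connection to the server localhost:8443 was refused")
	            }
	            return nil
	        })
	        fmt.Println("result:", err)
	    }
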
	I0923 12:44:21.556470 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:44:22.011065 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:44:22.467566 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:44:24.239379 3102485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:44:29.242123 3102485 node_ready.go:49] node "old-k8s-version-605045" has status "Ready":"True"
	I0923 12:44:29.242164 3102485 node_ready.go:38] duration metric: took 19.170103894s for node "old-k8s-version-605045" to be "Ready" ...
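
	The node_ready.go lines above simply poll GET /api/v1/nodes/<name> until the node's NodeReady condition turns True, tolerating connection-refused errors while the apiserver comes back. A rough client-go equivalent; the kubeconfig path and node name are taken from this run, while the 2s interval is an assumption.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // nodeReady reports whether the node's NodeReady condition is True.
	    func nodeReady(n *corev1.Node) bool {
	        for _, c := range n.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        for {
	            n, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-605045", metav1.GetOptions{})
	            if err == nil && nodeReady(n) {
	                fmt.Println("node is Ready")
	                return
	            }
	            // Connection-refused errors while the apiserver restarts are
	            // expected here; just poll again after a short wait.
	            time.Sleep(2 * time.Second)
	        }
	    }
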
	I0923 12:44:29.242182 3102485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:44:29.381435 3102485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-pb95z" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:29.442082 3102485 pod_ready.go:93] pod "coredns-74ff55c5b-pb95z" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:29.442160 3102485 pod_ready.go:82] duration metric: took 60.676392ms for pod "coredns-74ff55c5b-pb95z" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:29.442186 3102485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:29.459384 3102485 pod_ready.go:93] pod "etcd-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:29.459457 3102485 pod_ready.go:82] duration metric: took 17.248479ms for pod "etcd-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:29.459487 3102485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:29.486874 3102485 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"True"
	I0923 12:44:29.486946 3102485 pod_ready.go:82] duration metric: took 27.436322ms for pod "kube-apiserver-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:29.486973 3102485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:44:30.322691 3102485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.76610799s)
	I0923 12:44:30.387879 3102485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.376765342s)
	I0923 12:44:30.536576 3102485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.068972613s)
	I0923 12:44:30.536655 3102485 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-605045"
	I0923 12:44:30.595156 3102485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.355702867s)
	I0923 12:44:30.599206 3102485 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-605045 addons enable metrics-server
	
	I0923 12:44:30.602777 3102485 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0923 12:44:30.605613 3102485 addons.go:510] duration metric: took 20.862409042s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0923 12:44:31.493314 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:33.992763 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:35.994902 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:38.493457 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:40.493789 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:42.493878 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:44.993659 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:46.994117 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:48.996838 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:50.998691 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:53.022233 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:55.494758 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:57.994077 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:44:59.994505 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:01.994596 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:04.494404 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:06.993708 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:09.072448 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:11.493700 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:13.493962 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:15.494158 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:17.496501 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:19.993060 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:22.015003 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:24.539397 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:26.994612 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:29.497184 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:31.501408 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:33.994220 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:36.493434 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:38.494618 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:40.993697 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:42.993796 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:45.494023 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:47.993809 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:49.493816 3102485 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:49.493848 3102485 pod_ready.go:82] duration metric: took 1m20.006846553s for pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:49.493861 3102485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9m2fb" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:49.504701 3102485 pod_ready.go:93] pod "kube-proxy-9m2fb" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:49.504776 3102485 pod_ready.go:82] duration metric: took 10.905739ms for pod "kube-proxy-9m2fb" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:49.504803 3102485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:51.512244 3102485 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:52.510949 3102485 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:52.510976 3102485 pod_ready.go:82] duration metric: took 3.006151928s for pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:52.510987 3102485 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:54.517408 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:56.517938 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:59.018038 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:01.521178 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:04.017169 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:06.018012 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:08.020342 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:10.050466 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:12.519575 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:15.028066 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:17.516764 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:19.517985 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:21.527598 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:24.018262 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:26.516738 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:28.517429 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:30.517488 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:32.517691 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:34.518155 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:37.019755 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:39.517509 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:41.517906 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:44.018200 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:46.518620 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:49.017532 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:51.517271 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:53.518286 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:55.518511 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:58.018393 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:00.040850 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:02.519588 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:05.017851 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:07.517609 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:10.120171 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:12.516830 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:14.518809 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:17.017796 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:19.517358 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:21.521109 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:24.017997 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:26.517391 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:28.518031 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:31.017891 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:33.022935 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:35.025532 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:37.517416 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:39.518031 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:41.527952 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:44.017386 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:46.017790 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:48.018509 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:50.019200 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:52.517046 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:54.519238 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:57.017644 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:59.018465 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:01.517860 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:04.020306 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:06.518007 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:08.518217 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:10.518374 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:13.017127 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:15.025052 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:17.518912 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:19.519575 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:22.017866 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:24.519753 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:27.018096 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:29.517084 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:31.518247 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:34.017560 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:36.018135 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:38.018563 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:40.033260 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:42.516878 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:44.518028 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:47.017558 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:49.017887 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:51.104384 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:53.518375 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:56.017073 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:58.018046 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:00.051557 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:02.518428 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:05.018309 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:07.517421 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:09.517540 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:12.017948 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:14.518324 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:16.518725 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:19.017071 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:21.018952 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:23.517806 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:26.017683 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:28.018157 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:30.030456 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:32.517329 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:34.517424 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:36.517606 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:39.017532 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:41.516456 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:43.522639 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:46.019113 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:48.022948 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:50.517786 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:52.520527 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:52.520556 3102485 pod_ready.go:82] duration metric: took 4m0.009561565s for pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace to be "Ready" ...
	E0923 12:49:52.520568 3102485 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 12:49:52.520575 3102485 pod_ready.go:39] duration metric: took 5m23.27838172s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
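
	Each pod_ready poll above reads the pod's PodReady condition; metrics-server never reports Ready (its image pull fails, per the kubelet log below), so the 6m0s budget expires with context deadline exceeded. A condensed client-go sketch of that wait; the pod name and timeout come from this run, and error handling is trimmed for brevity.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podReady reports whether the pod's PodReady condition is True,
	    // i.e. the "Ready":"True" status printed by pod_ready.go above.
	    func podReady(p *corev1.Pod) bool {
	        for _, c := range p.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        client, _ := kubernetes.NewForConfig(cfg)

	        // Mirror "waiting up to 6m0s for pod ... to be Ready".
	        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	        defer cancel()
	        for {
	            p, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-9975d5f86-kf958", metav1.GetOptions{})
	            if err == nil && podReady(p) {
	                fmt.Println("Ready")
	                return
	            }
	            select {
	            case <-ctx.Done():
	                // Prints "context deadline exceeded", as in the E-line above.
	                fmt.Println("WaitExtra: waitPodCondition:", ctx.Err())
	                return
	            case <-time.After(2 * time.Second):
	            }
	        }
	    }
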
	I0923 12:49:52.520590 3102485 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:49:52.520620 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 12:49:52.520680 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 12:49:52.582116 3102485 cri.go:89] found id: "b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:49:52.582139 3102485 cri.go:89] found id: "2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:49:52.582143 3102485 cri.go:89] found id: ""
	I0923 12:49:52.582150 3102485 logs.go:276] 2 containers: [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85]
	I0923 12:49:52.582208 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.586211 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.590182 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 12:49:52.590251 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 12:49:52.643577 3102485 cri.go:89] found id: "ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:49:52.643596 3102485 cri.go:89] found id: "1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:49:52.643601 3102485 cri.go:89] found id: ""
	I0923 12:49:52.643608 3102485 logs.go:276] 2 containers: [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2]
	I0923 12:49:52.643669 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.649826 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.653847 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 12:49:52.653956 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 12:49:52.723423 3102485 cri.go:89] found id: "0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:49:52.723449 3102485 cri.go:89] found id: "8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:49:52.723465 3102485 cri.go:89] found id: ""
	I0923 12:49:52.723473 3102485 logs.go:276] 2 containers: [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993]
	I0923 12:49:52.723539 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.728406 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.739893 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 12:49:52.739971 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 12:49:52.794862 3102485 cri.go:89] found id: "df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:49:52.794881 3102485 cri.go:89] found id: "2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:49:52.794886 3102485 cri.go:89] found id: ""
	I0923 12:49:52.794893 3102485 logs.go:276] 2 containers: [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef]
	I0923 12:49:52.794947 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.798877 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.802564 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 12:49:52.802683 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 12:49:52.882169 3102485 cri.go:89] found id: "3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:49:52.882242 3102485 cri.go:89] found id: "c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:49:52.882262 3102485 cri.go:89] found id: ""
	I0923 12:49:52.882286 3102485 logs.go:276] 2 containers: [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8]
	I0923 12:49:52.882372 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.886656 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.890389 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 12:49:52.890507 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 12:49:52.943880 3102485 cri.go:89] found id: "37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:49:52.943913 3102485 cri.go:89] found id: "7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:49:52.943918 3102485 cri.go:89] found id: ""
	I0923 12:49:52.943926 3102485 logs.go:276] 2 containers: [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af]
	I0923 12:49:52.943988 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.947650 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.951076 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 12:49:52.951161 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 12:49:52.988808 3102485 cri.go:89] found id: "a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:49:52.988830 3102485 cri.go:89] found id: "7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:49:52.988835 3102485 cri.go:89] found id: ""
	I0923 12:49:52.988848 3102485 logs.go:276] 2 containers: [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f]
	I0923 12:49:52.988909 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.992697 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.996136 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 12:49:52.996206 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 12:49:53.042311 3102485 cri.go:89] found id: "7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
	I0923 12:49:53.042336 3102485 cri.go:89] found id: ""
	I0923 12:49:53.042345 3102485 logs.go:276] 1 containers: [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a]
	I0923 12:49:53.042421 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:53.046105 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 12:49:53.046208 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 12:49:53.088283 3102485 cri.go:89] found id: "036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:49:53.088309 3102485 cri.go:89] found id: "0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:49:53.088314 3102485 cri.go:89] found id: ""
	I0923 12:49:53.088321 3102485 logs.go:276] 2 containers: [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649]
	I0923 12:49:53.088398 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:53.092061 3102485 ssh_runner.go:195] Run: which crictl
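
	The listing block above shells out to crictl once per component to collect container IDs; the two IDs per name are the current container and its pre-restart predecessor. An os/exec sketch of the same call, assuming it runs on the node where passwordless sudo and crictl are available.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs returns all container IDs (running or exited) whose name
	    // matches the filter, like the `crictl ps -a --quiet --name=kube-apiserver`
	    // runs above.
	    func containerIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(strings.TrimSpace(string(out))), nil
	    }

	    func main() {
	        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
	            ids, err := containerIDs(name)
	            if err != nil {
	                fmt.Println(name, "error:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	        }
	    }
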
	I0923 12:49:53.095472 3102485 logs.go:123] Gathering logs for kube-controller-manager [7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af] ...
	I0923 12:49:53.095536 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:49:53.152558 3102485 logs.go:123] Gathering logs for kube-apiserver [2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85] ...
	I0923 12:49:53.152592 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:49:53.214318 3102485 logs.go:123] Gathering logs for kube-scheduler [2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef] ...
	I0923 12:49:53.214352 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:49:53.256471 3102485 logs.go:123] Gathering logs for etcd [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a] ...
	I0923 12:49:53.256504 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:49:53.299642 3102485 logs.go:123] Gathering logs for coredns [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb] ...
	I0923 12:49:53.299671 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:49:53.341299 3102485 logs.go:123] Gathering logs for kubernetes-dashboard [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a] ...
	I0923 12:49:53.341331 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
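
	Each gathered log is then just a tail of the container's output. A matching sketch; minikube wraps the call in /bin/bash -c, which is dropped here for simplicity, and the container ID is the kube-controller-manager one from above.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // containerLogs returns the last n lines of a container's log, the same
	    // `crictl logs --tail 400 <id>` call issued above. CombinedOutput captures
	    // both the container's stdout and stderr streams.
	    func containerLogs(id string, n int) (string, error) {
	        out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        logs, err := containerLogs("7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af", 400)
	        if err != nil {
	            fmt.Println("crictl error:", err)
	        }
	        fmt.Print(logs)
	    }
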
	I0923 12:49:53.382087 3102485 logs.go:123] Gathering logs for kubelet ...
	I0923 12:49:53.382166 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:49:53.438281 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989651     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.438539 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989920     658 reflector.go:138] object-"kube-system"/"coredns-token-ljzll": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ljzll" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.438760 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990004     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-267ss": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-267ss" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.438973 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990077     658 reflector.go:138] object-"kube-system"/"kindnet-token-np6vw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-np6vw" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.439173 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990159     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.439407 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990239     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xjwkr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xjwkr" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.439616 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990319     658 reflector.go:138] object-"default"/"default-token-cl5l9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cl5l9" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.441346 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:29 old-k8s-version-605045 kubelet[658]: E0923 12:44:29.145826     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7ztnm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7ztnm" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.449611 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.630570     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.450439 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.843250     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.453223 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:45 old-k8s-version-605045 kubelet[658]: E0923 12:44:45.501139     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.455318 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:53 old-k8s-version-605045 kubelet[658]: E0923 12:44:53.971600     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.455647 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:54 old-k8s-version-605045 kubelet[658]: E0923 12:44:54.975959     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.455973 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.411779     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.456158 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.466238     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.456946 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:04 old-k8s-version-605045 kubelet[658]: E0923 12:45:04.013052     658 pod_workers.go:191] Error syncing pod bc5f03f4-beaa-4a9b-a052-24d558f95beb ("storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"
	W0923 12:49:53.459838 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:09 old-k8s-version-605045 kubelet[658]: E0923 12:45:09.476299     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.460496 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:11 old-k8s-version-605045 kubelet[658]: E0923 12:45:11.038230     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.460830 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:16 old-k8s-version-605045 kubelet[658]: E0923 12:45:16.410049     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.461150 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:24 old-k8s-version-605045 kubelet[658]: E0923 12:45:24.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.461739 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:32 old-k8s-version-605045 kubelet[658]: E0923 12:45:32.109824     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.462067 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:36 old-k8s-version-605045 kubelet[658]: E0923 12:45:36.410320     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.462252 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:39 old-k8s-version-605045 kubelet[658]: E0923 12:45:39.471779     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.462581 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:51 old-k8s-version-605045 kubelet[658]: E0923 12:45:51.466132     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.465099 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:53 old-k8s-version-605045 kubelet[658]: E0923 12:45:53.479122     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.465442 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:02 old-k8s-version-605045 kubelet[658]: E0923 12:46:02.465540     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.465628 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:07 old-k8s-version-605045 kubelet[658]: E0923 12:46:07.466508     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.466225 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:17 old-k8s-version-605045 kubelet[658]: E0923 12:46:17.299572     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.466410 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:22 old-k8s-version-605045 kubelet[658]: E0923 12:46:22.465904     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.466735 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:26 old-k8s-version-605045 kubelet[658]: E0923 12:46:26.410045     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.466919 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:35 old-k8s-version-605045 kubelet[658]: E0923 12:46:35.469207     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.467245 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:38 old-k8s-version-605045 kubelet[658]: E0923 12:46:38.465574     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.467441 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:47 old-k8s-version-605045 kubelet[658]: E0923 12:46:47.467336     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.467773 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:52 old-k8s-version-605045 kubelet[658]: E0923 12:46:52.465530     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.467957 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:02 old-k8s-version-605045 kubelet[658]: E0923 12:47:02.465921     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.468283 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:07 old-k8s-version-605045 kubelet[658]: E0923 12:47:07.466739     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.471301 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:14 old-k8s-version-605045 kubelet[658]: E0923 12:47:14.473947     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.471640 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:22 old-k8s-version-605045 kubelet[658]: E0923 12:47:22.465965     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.471829 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:26 old-k8s-version-605045 kubelet[658]: E0923 12:47:26.465940     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.472156 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:33 old-k8s-version-605045 kubelet[658]: E0923 12:47:33.467362     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.472346 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:38 old-k8s-version-605045 kubelet[658]: E0923 12:47:38.465964     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.472930 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:45 old-k8s-version-605045 kubelet[658]: E0923 12:47:45.536822     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.473261 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:46 old-k8s-version-605045 kubelet[658]: E0923 12:47:46.541111     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.473453 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:49 old-k8s-version-605045 kubelet[658]: E0923 12:47:49.465875     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.473780 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:00 old-k8s-version-605045 kubelet[658]: E0923 12:48:00.479190     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.473963 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:02 old-k8s-version-605045 kubelet[658]: E0923 12:48:02.466076     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.474292 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:11 old-k8s-version-605045 kubelet[658]: E0923 12:48:11.465507     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.474477 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:16 old-k8s-version-605045 kubelet[658]: E0923 12:48:16.465938     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.474805 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:22 old-k8s-version-605045 kubelet[658]: E0923 12:48:22.465512     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.474992 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:30 old-k8s-version-605045 kubelet[658]: E0923 12:48:30.465840     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.475329 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:35 old-k8s-version-605045 kubelet[658]: E0923 12:48:35.474238     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.475519 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:44 old-k8s-version-605045 kubelet[658]: E0923 12:48:44.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.475848 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:49 old-k8s-version-605045 kubelet[658]: E0923 12:48:49.466027     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.476033 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:55 old-k8s-version-605045 kubelet[658]: E0923 12:48:55.466326     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.476361 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:04 old-k8s-version-605045 kubelet[658]: E0923 12:49:04.465573     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.476571 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:07 old-k8s-version-605045 kubelet[658]: E0923 12:49:07.466660     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.476901 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:16 old-k8s-version-605045 kubelet[658]: E0923 12:49:16.465577     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.477087 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:20 old-k8s-version-605045 kubelet[658]: E0923 12:49:20.466093     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.477419 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: E0923 12:49:29.465520     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.477605 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:32 old-k8s-version-605045 kubelet[658]: E0923 12:49:32.465885     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.477931 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.478116 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.478441 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	I0923 12:49:53.478453 3102485 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:49:53.478468 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:49:53.626281 3102485 logs.go:123] Gathering logs for kube-scheduler [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89] ...
	I0923 12:49:53.626312 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:49:53.671782 3102485 logs.go:123] Gathering logs for kube-proxy [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb] ...
	I0923 12:49:53.671827 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:49:53.715920 3102485 logs.go:123] Gathering logs for kube-proxy [c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8] ...
	I0923 12:49:53.715952 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:49:53.773316 3102485 logs.go:123] Gathering logs for kindnet [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6] ...
	I0923 12:49:53.773355 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:49:53.824387 3102485 logs.go:123] Gathering logs for dmesg ...
	I0923 12:49:53.824420 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:49:53.844575 3102485 logs.go:123] Gathering logs for coredns [8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993] ...
	I0923 12:49:53.844609 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:49:53.896670 3102485 logs.go:123] Gathering logs for kube-controller-manager [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd] ...
	I0923 12:49:53.896699 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:49:53.954435 3102485 logs.go:123] Gathering logs for kindnet [7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f] ...
	I0923 12:49:53.954471 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:49:53.996651 3102485 logs.go:123] Gathering logs for storage-provisioner [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19] ...
	I0923 12:49:53.996678 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:49:54.043169 3102485 logs.go:123] Gathering logs for storage-provisioner [0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649] ...
	I0923 12:49:54.043207 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:49:54.091390 3102485 logs.go:123] Gathering logs for containerd ...
	I0923 12:49:54.091423 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 12:49:54.154040 3102485 logs.go:123] Gathering logs for container status ...
	I0923 12:49:54.154077 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:49:54.203612 3102485 logs.go:123] Gathering logs for kube-apiserver [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb] ...
	I0923 12:49:54.203644 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:49:54.262283 3102485 logs.go:123] Gathering logs for etcd [1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2] ...
	I0923 12:49:54.262317 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
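Every "Gathering logs for ..." step above resolves to the same shape of command: tail the last 400 lines of one container by ID over the node's CRI runtime. A minimal Go sketch of that fan-out is below; the crictl invocation mirrors the log, while the shortened container ID in main is only illustrative (crictl accepts unambiguous ID prefixes).

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs pulls the last n lines of a container's logs the same way
// the runner above does: by shelling out to crictl on the node.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Shortened form of the etcd container ID from the log, for illustration.
	logs, err := tailContainerLogs("1d4f50e137bb", 400)
	if err != nil {
		fmt.Println("crictl logs failed:", err)
		return
	}
	fmt.Print(logs)
}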
	I0923 12:49:54.308962 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:54.308990 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:49:54.309084 3102485 out.go:270] X Problems detected in kubelet:
	W0923 12:49:54.309098 3102485 out.go:270]   Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: E0923 12:49:29.465520     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:54.309128 3102485 out.go:270]   Sep 23 12:49:32 old-k8s-version-605045 kubelet[658]: E0923 12:49:32.465885     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:54.309153 3102485 out.go:270]   Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:54.309161 3102485 out.go:270]   Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:54.309172 3102485 out.go:270]   Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	I0923 12:49:54.309177 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:54.309191 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
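At this point the runner goes back to waiting on the control plane: the next lines pgrep for the kube-apiserver process and then watch its healthz status. A minimal sketch of that wait pattern, assuming an illustrative endpoint, timeout, and poll interval (minikube's real values and TLS setup differ):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"time"
)

// apiserverHealthy probes /healthz once. The apiserver serves it over TLS
// signed by a cluster-local CA, so a quick probe either loads that CA or,
// as here, skips verification.
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
	// Step 1: is the process up at all? (mirrors the pgrep in the log)
	if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	// Step 2: poll healthz until it answers ok or the deadline expires.
	deadline := time.Now().Add(4 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if apiserverHealthy("https://192.168.85.2:8443/healthz") { // illustrative endpoint
			fmt.Println("apiserver healthz: ok")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}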
	I0923 12:50:04.310257 3102485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:50:04.324717 3102485 api_server.go:72] duration metric: took 5m54.581827s to wait for apiserver process to appear ...
	I0923 12:50:04.324741 3102485 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:50:04.324779 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 12:50:04.324837 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 12:50:04.371861 3102485 cri.go:89] found id: "b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:50:04.371888 3102485 cri.go:89] found id: "2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:50:04.371893 3102485 cri.go:89] found id: ""
	I0923 12:50:04.371900 3102485 logs.go:276] 2 containers: [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85]
	I0923 12:50:04.371957 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.376309 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.380595 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 12:50:04.380674 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 12:50:04.423923 3102485 cri.go:89] found id: "ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:50:04.423947 3102485 cri.go:89] found id: "1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:50:04.423952 3102485 cri.go:89] found id: ""
	I0923 12:50:04.423960 3102485 logs.go:276] 2 containers: [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2]
	I0923 12:50:04.424021 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.427792 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.431611 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 12:50:04.431688 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 12:50:04.472288 3102485 cri.go:89] found id: "0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:50:04.472310 3102485 cri.go:89] found id: "8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:50:04.472315 3102485 cri.go:89] found id: ""
	I0923 12:50:04.472322 3102485 logs.go:276] 2 containers: [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993]
	I0923 12:50:04.472380 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.476372 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.479861 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 12:50:04.479939 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 12:50:04.519484 3102485 cri.go:89] found id: "df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:50:04.519508 3102485 cri.go:89] found id: "2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:50:04.519513 3102485 cri.go:89] found id: ""
	I0923 12:50:04.519521 3102485 logs.go:276] 2 containers: [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef]
	I0923 12:50:04.519579 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.523443 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.527138 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 12:50:04.527233 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 12:50:04.571087 3102485 cri.go:89] found id: "3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:50:04.571107 3102485 cri.go:89] found id: "c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:50:04.571112 3102485 cri.go:89] found id: ""
	I0923 12:50:04.571119 3102485 logs.go:276] 2 containers: [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8]
	I0923 12:50:04.571176 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.575120 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.578668 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 12:50:04.578769 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 12:50:04.622151 3102485 cri.go:89] found id: "37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:50:04.622181 3102485 cri.go:89] found id: "7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:50:04.622186 3102485 cri.go:89] found id: ""
	I0923 12:50:04.622194 3102485 logs.go:276] 2 containers: [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af]
	I0923 12:50:04.622263 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.626117 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.629549 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 12:50:04.629624 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 12:50:04.669203 3102485 cri.go:89] found id: "a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:50:04.669277 3102485 cri.go:89] found id: "7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:50:04.669291 3102485 cri.go:89] found id: ""
	I0923 12:50:04.669299 3102485 logs.go:276] 2 containers: [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f]
	I0923 12:50:04.669417 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.673133 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.676629 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 12:50:04.676723 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 12:50:04.718915 3102485 cri.go:89] found id: "7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
	I0923 12:50:04.718939 3102485 cri.go:89] found id: ""
	I0923 12:50:04.718947 3102485 logs.go:276] 1 containers: [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a]
	I0923 12:50:04.719031 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.723030 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 12:50:04.723104 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 12:50:04.773144 3102485 cri.go:89] found id: "036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:50:04.773225 3102485 cri.go:89] found id: "0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:50:04.773246 3102485 cri.go:89] found id: ""
	I0923 12:50:04.773280 3102485 logs.go:276] 2 containers: [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649]
	I0923 12:50:04.773393 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.777009 3102485 ssh_runner.go:195] Run: which crictl
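The block above repeats one discovery pattern per control-plane component: list every CRI container, running or exited, whose name matches the component, and keep all returned IDs, which is why most components report two containers after the restart (the current instance plus the pre-restart one). A minimal Go sketch of that discovery, with an illustrative subset of the component list:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs by name. --quiet prints only IDs, and -a
// includes exited containers, matching the invocations in the log.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}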
	I0923 12:50:04.780384 3102485 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:50:04.780408 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:50:04.928411 3102485 logs.go:123] Gathering logs for etcd [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a] ...
	I0923 12:50:04.928490 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:50:04.972999 3102485 logs.go:123] Gathering logs for kube-scheduler [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89] ...
	I0923 12:50:04.973033 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:50:05.018662 3102485 logs.go:123] Gathering logs for kube-scheduler [2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef] ...
	I0923 12:50:05.018696 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:50:05.073546 3102485 logs.go:123] Gathering logs for kube-controller-manager [7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af] ...
	I0923 12:50:05.073652 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:50:05.135452 3102485 logs.go:123] Gathering logs for kindnet [7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f] ...
	I0923 12:50:05.135491 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:50:05.180530 3102485 logs.go:123] Gathering logs for kubelet ...
	I0923 12:50:05.180558 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
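The journalctl run above feeds the kubelet problem scan that produces the "Found kubelet problem" lines below. A minimal sketch of that scan; the pattern list is an illustrative guess at the kinds of strings being matched, not minikube's actual rule set:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Illustrative failure signatures; each match becomes one warning line.
	patterns := []string{"ErrImagePull", "ImagePullBackOff", "CrashLoopBackOff", "Failed to watch"}
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
	_ = cmd.Wait()
}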
	W0923 12:50:05.235647 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989651     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.235906 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989920     658 reflector.go:138] object-"kube-system"/"coredns-token-ljzll": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ljzll" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236127 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990004     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-267ss": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-267ss" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236340 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990077     658 reflector.go:138] object-"kube-system"/"kindnet-token-np6vw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-np6vw" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236541 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990159     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236769 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990239     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xjwkr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xjwkr" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236977 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990319     658 reflector.go:138] object-"default"/"default-token-cl5l9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cl5l9" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.238682 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:29 old-k8s-version-605045 kubelet[658]: E0923 12:44:29.145826     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7ztnm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7ztnm" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.246929 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.630570     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.247743 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.843250     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.250499 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:45 old-k8s-version-605045 kubelet[658]: E0923 12:44:45.501139     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.252595 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:53 old-k8s-version-605045 kubelet[658]: E0923 12:44:53.971600     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.252923 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:54 old-k8s-version-605045 kubelet[658]: E0923 12:44:54.975959     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.253253 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.411779     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.253439 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.466238     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.254205 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:04 old-k8s-version-605045 kubelet[658]: E0923 12:45:04.013052     658 pod_workers.go:191] Error syncing pod bc5f03f4-beaa-4a9b-a052-24d558f95beb ("storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"
	W0923 12:50:05.256971 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:09 old-k8s-version-605045 kubelet[658]: E0923 12:45:09.476299     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.257566 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:11 old-k8s-version-605045 kubelet[658]: E0923 12:45:11.038230     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.257891 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:16 old-k8s-version-605045 kubelet[658]: E0923 12:45:16.410049     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.258204 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:24 old-k8s-version-605045 kubelet[658]: E0923 12:45:24.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.258788 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:32 old-k8s-version-605045 kubelet[658]: E0923 12:45:32.109824     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.259112 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:36 old-k8s-version-605045 kubelet[658]: E0923 12:45:36.410320     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.259325 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:39 old-k8s-version-605045 kubelet[658]: E0923 12:45:39.471779     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.259652 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:51 old-k8s-version-605045 kubelet[658]: E0923 12:45:51.466132     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.262093 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:53 old-k8s-version-605045 kubelet[658]: E0923 12:45:53.479122     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.262418 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:02 old-k8s-version-605045 kubelet[658]: E0923 12:46:02.465540     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.262604 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:07 old-k8s-version-605045 kubelet[658]: E0923 12:46:07.466508     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.263188 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:17 old-k8s-version-605045 kubelet[658]: E0923 12:46:17.299572     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.263379 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:22 old-k8s-version-605045 kubelet[658]: E0923 12:46:22.465904     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.263705 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:26 old-k8s-version-605045 kubelet[658]: E0923 12:46:26.410045     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.263889 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:35 old-k8s-version-605045 kubelet[658]: E0923 12:46:35.469207     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.264212 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:38 old-k8s-version-605045 kubelet[658]: E0923 12:46:38.465574     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.264396 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:47 old-k8s-version-605045 kubelet[658]: E0923 12:46:47.467336     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.264721 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:52 old-k8s-version-605045 kubelet[658]: E0923 12:46:52.465530     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.264904 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:02 old-k8s-version-605045 kubelet[658]: E0923 12:47:02.465921     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.265242 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:07 old-k8s-version-605045 kubelet[658]: E0923 12:47:07.466739     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.267689 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:14 old-k8s-version-605045 kubelet[658]: E0923 12:47:14.473947     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.268018 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:22 old-k8s-version-605045 kubelet[658]: E0923 12:47:22.465965     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.268204 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:26 old-k8s-version-605045 kubelet[658]: E0923 12:47:26.465940     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.268528 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:33 old-k8s-version-605045 kubelet[658]: E0923 12:47:33.467362     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.268712 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:38 old-k8s-version-605045 kubelet[658]: E0923 12:47:38.465964     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.269300 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:45 old-k8s-version-605045 kubelet[658]: E0923 12:47:45.536822     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.269629 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:46 old-k8s-version-605045 kubelet[658]: E0923 12:47:46.541111     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.269813 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:49 old-k8s-version-605045 kubelet[658]: E0923 12:47:49.465875     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.270138 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:00 old-k8s-version-605045 kubelet[658]: E0923 12:48:00.479190     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.270323 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:02 old-k8s-version-605045 kubelet[658]: E0923 12:48:02.466076     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.270652 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:11 old-k8s-version-605045 kubelet[658]: E0923 12:48:11.465507     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.270835 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:16 old-k8s-version-605045 kubelet[658]: E0923 12:48:16.465938     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.271160 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:22 old-k8s-version-605045 kubelet[658]: E0923 12:48:22.465512     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.271360 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:30 old-k8s-version-605045 kubelet[658]: E0923 12:48:30.465840     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.271686 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:35 old-k8s-version-605045 kubelet[658]: E0923 12:48:35.474238     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.271870 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:44 old-k8s-version-605045 kubelet[658]: E0923 12:48:44.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.272196 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:49 old-k8s-version-605045 kubelet[658]: E0923 12:48:49.466027     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.272381 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:55 old-k8s-version-605045 kubelet[658]: E0923 12:48:55.466326     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.272706 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:04 old-k8s-version-605045 kubelet[658]: E0923 12:49:04.465573     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.272892 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:07 old-k8s-version-605045 kubelet[658]: E0923 12:49:07.466660     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.273218 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:16 old-k8s-version-605045 kubelet[658]: E0923 12:49:16.465577     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.273406 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:20 old-k8s-version-605045 kubelet[658]: E0923 12:49:20.466093     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.273731 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: E0923 12:49:29.465520     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.273915 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:32 old-k8s-version-605045 kubelet[658]: E0923 12:49:32.465885     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.274240 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.274425 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.274751 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.277200 3102485 logs.go:138] Found kubelet problem: Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.486035     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.277533 3102485 logs.go:138] Found kubelet problem: Sep 23 12:50:04 old-k8s-version-605045 kubelet[658]: E0923 12:50:04.465610     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	I0923 12:50:05.277544 3102485 logs.go:123] Gathering logs for etcd [1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2] ...
	I0923 12:50:05.277558 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:50:05.322579 3102485 logs.go:123] Gathering logs for coredns [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb] ...
	I0923 12:50:05.322658 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:50:05.365537 3102485 logs.go:123] Gathering logs for coredns [8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993] ...
	I0923 12:50:05.365569 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:50:05.407857 3102485 logs.go:123] Gathering logs for kube-proxy [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb] ...
	I0923 12:50:05.407925 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:50:05.456230 3102485 logs.go:123] Gathering logs for kube-proxy [c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8] ...
	I0923 12:50:05.456256 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:50:05.499217 3102485 logs.go:123] Gathering logs for kube-controller-manager [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd] ...
	I0923 12:50:05.499248 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:50:05.563528 3102485 logs.go:123] Gathering logs for kubernetes-dashboard [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a] ...
	I0923 12:50:05.563564 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
	I0923 12:50:05.606773 3102485 logs.go:123] Gathering logs for kube-apiserver [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb] ...
	I0923 12:50:05.606803 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:50:05.679239 3102485 logs.go:123] Gathering logs for containerd ...
	I0923 12:50:05.679272 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 12:50:05.750829 3102485 logs.go:123] Gathering logs for storage-provisioner [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19] ...
	I0923 12:50:05.750867 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:50:05.792354 3102485 logs.go:123] Gathering logs for storage-provisioner [0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649] ...
	I0923 12:50:05.792381 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:50:05.836837 3102485 logs.go:123] Gathering logs for kindnet [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6] ...
	I0923 12:50:05.836865 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:50:05.905615 3102485 logs.go:123] Gathering logs for kube-apiserver [2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85] ...
	I0923 12:50:05.905647 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:50:05.963389 3102485 logs.go:123] Gathering logs for container status ...
	I0923 12:50:05.963426 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:50:06.020318 3102485 logs.go:123] Gathering logs for dmesg ...
	I0923 12:50:06.020356 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:50:06.038364 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:50:06.038396 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:50:06.038444 3102485 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0923 12:50:06.038457 3102485 out.go:270]   Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	  Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:06.038464 3102485 out.go:270]   Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:06.038477 3102485 out.go:270]   Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	  Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:06.038485 3102485 out.go:270]   Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.486035     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	  Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.486035     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:06.038496 3102485 out.go:270]   Sep 23 12:50:04 old-k8s-version-605045 kubelet[658]: E0923 12:50:04.465610     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	  Sep 23 12:50:04 old-k8s-version-605045 kubelet[658]: E0923 12:50:04.465610     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	I0923 12:50:06.038518 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:50:06.038531 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:50:16.039609 3102485 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0923 12:50:16.049801 3102485 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0923 12:50:16.052452 3102485 out.go:201] 
	W0923 12:50:16.054574 3102485 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 12:50:16.054647 3102485 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 12:50:16.054671 3102485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 12:50:16.054677 3102485 out.go:270] * 
	* 
	W0923 12:50:16.055931 3102485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:50:16.058110 3102485 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-605045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
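The stderr above shows the two recurring failures behind that exit status: the metrics-server pod can never pull fake.domain/registry.k8s.io/echoserver:1.4 (the registry host does not resolve, so the pod stays in ImagePullBackOff), and dashboard-metrics-scraper sits in CrashLoopBackOff with a growing back-off timer. A minimal manual-triage sketch, assuming the kubeconfig context matches the profile name from this run and reusing the deployment and pod names taken from the log above:

    # Which image is the metrics-server deployment actually pointing at?
    kubectl --context old-k8s-version-605045 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

    # Why is the scraper crash-looping? Check the previous container's logs
    # (pod name copied from the kubelet warnings above).
    kubectl --context old-k8s-version-605045 -n kubernetes-dashboard \
      logs dashboard-metrics-scraper-8d5bb5db8-7btzn --previous

    # Cleanup path suggested by the failure message itself:
    out/minikube-linux-arm64 delete --all --purge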
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-605045
helpers_test.go:235: (dbg) docker inspect old-k8s-version-605045:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ebb5aa29eb89bc383c06bc55a271705a08442d385b07e836b899df05c91d95c1",
	        "Created": "2024-09-23T12:41:07.572363964Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3102679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T12:44:01.556116217Z",
	            "FinishedAt": "2024-09-23T12:44:00.300083765Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/ebb5aa29eb89bc383c06bc55a271705a08442d385b07e836b899df05c91d95c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ebb5aa29eb89bc383c06bc55a271705a08442d385b07e836b899df05c91d95c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ebb5aa29eb89bc383c06bc55a271705a08442d385b07e836b899df05c91d95c1/hosts",
	        "LogPath": "/var/lib/docker/containers/ebb5aa29eb89bc383c06bc55a271705a08442d385b07e836b899df05c91d95c1/ebb5aa29eb89bc383c06bc55a271705a08442d385b07e836b899df05c91d95c1-json.log",
	        "Name": "/old-k8s-version-605045",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-605045:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-605045",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c61653332bf8039cb4cd414ec2430ed0d1647a22650a9129cc5a5c78eae05bfd-init/diff:/var/lib/docker/overlay2/e2b16ea68ee0680d6b3555ff1ad64b95e5f88f6159373a302ec4d54fa432d99a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c61653332bf8039cb4cd414ec2430ed0d1647a22650a9129cc5a5c78eae05bfd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c61653332bf8039cb4cd414ec2430ed0d1647a22650a9129cc5a5c78eae05bfd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c61653332bf8039cb4cd414ec2430ed0d1647a22650a9129cc5a5c78eae05bfd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-605045",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-605045/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-605045",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-605045",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-605045",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "145254c45e4791100c02ebcc9b9140ac306fabe564865155c0dd2c63ee48c97b",
	            "SandboxKey": "/var/run/docker/netns/145254c45e47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42087"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42088"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42091"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42089"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "42090"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-605045": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "54eadf17238d4d31d7e9b95d0cde7c8feef524ccd6ac4891d402b519e66c82ba",
	                    "EndpointID": "5cdb0495cf4e11acb64709afab5ebf328fa6b550f2bfd9464b981f6b0cdb2e45",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-605045",
	                        "ebb5aa29eb89"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
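The full docker inspect dump above is useful for the record, but for spot checks a Go-template filter narrows it to the fields this post-mortem actually relies on. A sketch, assuming only the container name shown in this report:

    # Container state and last start time:
    docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' old-k8s-version-605045

    # The node IP on the profile's network:
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-605045").IPAddress}}' old-k8s-version-605045

    # The host port mapped to the API server (8443/tcp):
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-605045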
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-605045 -n old-k8s-version-605045
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-605045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-605045 logs -n 25: (2.958413214s)
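The "logs -n 25" invocation keeps this post-mortem short by limiting each log source to its last 25 lines; for a complete capture, the suggestion box in the stderr above already names the command, shown here with this run's profile:

    out/minikube-linux-arm64 -p old-k8s-version-605045 logs --file=logs.txt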
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-492817                                       | cilium-492817             | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	| ssh     | force-systemd-flag-410328                              | force-systemd-flag-410328 | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| start   | -p force-systemd-env-461049                            | force-systemd-env-461049  | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:40 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-410328                           | force-systemd-flag-410328 | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:39 UTC |
	| start   | -p cert-expiration-130595                              | cert-expiration-130595    | jenkins | v1.34.0 | 23 Sep 24 12:39 UTC | 23 Sep 24 12:40 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-461049                               | force-systemd-env-461049  | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-461049                            | force-systemd-env-461049  | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	| start   | -p cert-options-007095                                 | cert-options-007095       | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-007095 ssh                                | cert-options-007095       | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-007095 -- sudo                         | cert-options-007095       | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:40 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-007095                                 | cert-options-007095       | jenkins | v1.34.0 | 23 Sep 24 12:40 UTC | 23 Sep 24 12:41 UTC |
	| start   | -p old-k8s-version-605045                              | old-k8s-version-605045    | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:43 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-130595                              | cert-expiration-130595    | jenkins | v1.34.0 | 23 Sep 24 12:43 UTC | 23 Sep 24 12:43 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-130595                              | cert-expiration-130595    | jenkins | v1.34.0 | 23 Sep 24 12:43 UTC | 23 Sep 24 12:43 UTC |
	| start   | -p no-preload-569169                                   | no-preload-569169         | jenkins | v1.34.0 | 23 Sep 24 12:43 UTC | 23 Sep 24 12:44 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-605045        | old-k8s-version-605045    | jenkins | v1.34.0 | 23 Sep 24 12:43 UTC | 23 Sep 24 12:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-605045                              | old-k8s-version-605045    | jenkins | v1.34.0 | 23 Sep 24 12:43 UTC | 23 Sep 24 12:44 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-605045             | old-k8s-version-605045    | jenkins | v1.34.0 | 23 Sep 24 12:44 UTC | 23 Sep 24 12:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-605045                              | old-k8s-version-605045    | jenkins | v1.34.0 | 23 Sep 24 12:44 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-569169             | no-preload-569169         | jenkins | v1.34.0 | 23 Sep 24 12:45 UTC | 23 Sep 24 12:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-569169                                   | no-preload-569169         | jenkins | v1.34.0 | 23 Sep 24 12:45 UTC | 23 Sep 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-569169                  | no-preload-569169         | jenkins | v1.34.0 | 23 Sep 24 12:45 UTC | 23 Sep 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-569169                                   | no-preload-569169         | jenkins | v1.34.0 | 23 Sep 24 12:45 UTC | 23 Sep 24 12:50 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| image   | no-preload-569169 image list                           | no-preload-569169         | jenkins | v1.34.0 | 23 Sep 24 12:50 UTC | 23 Sep 24 12:50 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-569169                                   | no-preload-569169         | jenkins | v1.34.0 | 23 Sep 24 12:50 UTC | 23 Sep 24 12:50 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
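	For context: the failing SecondStart is the last old-k8s-version-605045 start in the table above, the only entry with no End Time. A minimal reproduction sketch assembled from the flags recorded there, assuming a local arm64 minikube build at out/minikube-linux-arm64 and the already-created, stopped profile from the earlier stop step:
	
	    out/minikube-linux-arm64 start -p old-k8s-version-605045 \
	      --memory=2200 --alsologtostderr --wait=true \
	      --kvm-network=default --kvm-qemu-uri=qemu:///system \
	      --disable-driver-mounts --keep-context=false \
	      --driver=docker --container-runtime=containerd \
	      --kubernetes-version=v1.20.0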
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:45:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:45:14.279586 3107459 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:45:14.279730 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:45:14.279742 3107459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:45:14.279747 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:45:14.279978 3107459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:45:14.280327 3107459 out.go:352] Setting JSON to false
	I0923 12:45:14.281467 3107459 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":160062,"bootTime":1726935453,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 12:45:14.281537 3107459 start.go:139] virtualization:  
	I0923 12:45:14.283922 3107459 out.go:177] * [no-preload-569169] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 12:45:14.286502 3107459 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 12:45:14.286583 3107459 notify.go:220] Checking for updates...
	I0923 12:45:14.290067 3107459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:45:14.291728 3107459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:45:14.293160 3107459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 12:45:14.294611 3107459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 12:45:14.296419 3107459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:45:14.298822 3107459 config.go:182] Loaded profile config "no-preload-569169": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:45:14.299443 3107459 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:45:14.326338 3107459 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 12:45:14.326470 3107459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:45:14.379094 3107459 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 12:45:14.369137989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:45:14.379215 3107459 docker.go:318] overlay module found
	I0923 12:45:14.381182 3107459 out.go:177] * Using the docker driver based on existing profile
	I0923 12:45:14.383173 3107459 start.go:297] selected driver: docker
	I0923 12:45:14.383191 3107459 start.go:901] validating driver "docker" against &{Name:no-preload-569169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-569169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:45:14.383402 3107459 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:45:14.384035 3107459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:45:14.447151 3107459 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 12:45:14.436437037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:45:14.447605 3107459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:45:14.447630 3107459 cni.go:84] Creating CNI manager for ""
	I0923 12:45:14.447680 3107459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 12:45:14.447722 3107459 start.go:340] cluster config:
	{Name:no-preload-569169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-569169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:45:14.450403 3107459 out.go:177] * Starting "no-preload-569169" primary control-plane node in "no-preload-569169" cluster
	I0923 12:45:14.452700 3107459 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 12:45:14.455464 3107459 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 12:45:14.457939 3107459 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 12:45:14.458027 3107459 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 12:45:14.458099 3107459 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/config.json ...
	I0923 12:45:14.458403 3107459 cache.go:107] acquiring lock: {Name:mke8d8830849e51322239aa8cc0a3c463bcc6371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.458480 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 12:45:14.458498 3107459 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.397µs
	I0923 12:45:14.458509 3107459 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 12:45:14.458535 3107459 cache.go:107] acquiring lock: {Name:mk380c7f0f446757cd4e5205bb6adf67fd3beaca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.458573 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0923 12:45:14.458583 3107459 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 58.314µs
	I0923 12:45:14.458589 3107459 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0923 12:45:14.458599 3107459 cache.go:107] acquiring lock: {Name:mk426dedfca3645ec0df888f1b65ce710f022963 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.458632 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0923 12:45:14.458703 3107459 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 103.523µs
	I0923 12:45:14.458720 3107459 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0923 12:45:14.458744 3107459 cache.go:107] acquiring lock: {Name:mk1a6719ed2eed12e30fe3565a32489c6e7f380e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.458788 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0923 12:45:14.458799 3107459 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 68.037µs
	I0923 12:45:14.458805 3107459 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0923 12:45:14.458831 3107459 cache.go:107] acquiring lock: {Name:mk77a731cf4156fe8fa23e8d48b5d6e3eaebdeb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.458881 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0923 12:45:14.458893 3107459 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 64.09µs
	I0923 12:45:14.458900 3107459 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0923 12:45:14.458917 3107459 cache.go:107] acquiring lock: {Name:mk8839f0ea7b3b217dec1b86ead05f417efd7de6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.458972 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0923 12:45:14.458991 3107459 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 69.883µs
	I0923 12:45:14.459013 3107459 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0923 12:45:14.459024 3107459 cache.go:107] acquiring lock: {Name:mk1f7b1ae4ef6e0a2df8534fa736ff06aaa6deb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.459059 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0923 12:45:14.459068 3107459 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 46.408µs
	I0923 12:45:14.459090 3107459 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0923 12:45:14.459099 3107459 cache.go:107] acquiring lock: {Name:mka73553c2e8e662311c860daefb5fd00644fe51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.459133 3107459 cache.go:115] /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0923 12:45:14.459142 3107459 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 44.414µs
	I0923 12:45:14.459164 3107459 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0923 12:45:14.459172 3107459 cache.go:87] Successfully saved all images to host disk.
	I0923 12:45:14.477893 3107459 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 12:45:14.477914 3107459 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 12:45:14.477927 3107459 cache.go:194] Successfully downloaded all kic artifacts
	I0923 12:45:14.477951 3107459 start.go:360] acquireMachinesLock for no-preload-569169: {Name:mkcc5cff1c73f085ff71e70bfa5e7e882d7b4ad1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:45:14.478003 3107459 start.go:364] duration metric: took 37.144µs to acquireMachinesLock for "no-preload-569169"
	I0923 12:45:14.478024 3107459 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:45:14.478029 3107459 fix.go:54] fixHost starting: 
	I0923 12:45:14.478325 3107459 cli_runner.go:164] Run: docker container inspect no-preload-569169 --format={{.State.Status}}
	I0923 12:45:14.504303 3107459 fix.go:112] recreateIfNeeded on no-preload-569169: state=Stopped err=<nil>
	W0923 12:45:14.504329 3107459 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:45:14.510080 3107459 out.go:177] * Restarting existing docker container for "no-preload-569169" ...
	I0923 12:45:11.493700 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:13.493962 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:15.494158 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:14.512866 3107459 cli_runner.go:164] Run: docker start no-preload-569169
	I0923 12:45:14.867922 3107459 cli_runner.go:164] Run: docker container inspect no-preload-569169 --format={{.State.Status}}
	I0923 12:45:14.886982 3107459 kic.go:430] container "no-preload-569169" state is running.
	I0923 12:45:14.887541 3107459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-569169
	I0923 12:45:14.911517 3107459 profile.go:143] Saving config to /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/config.json ...
	I0923 12:45:14.912256 3107459 machine.go:93] provisionDockerMachine start ...
	I0923 12:45:14.912404 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:14.933454 3107459 main.go:141] libmachine: Using SSH client type: native
	I0923 12:45:14.933722 3107459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 42092 <nil> <nil>}
	I0923 12:45:14.933732 3107459 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:45:14.934395 3107459 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60690->127.0.0.1:42092: read: connection reset by peer
	I0923 12:45:18.083493 3107459 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-569169
	
	I0923 12:45:18.083554 3107459 ubuntu.go:169] provisioning hostname "no-preload-569169"
	I0923 12:45:18.083624 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:18.109233 3107459 main.go:141] libmachine: Using SSH client type: native
	I0923 12:45:18.109539 3107459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 42092 <nil> <nil>}
	I0923 12:45:18.109564 3107459 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-569169 && echo "no-preload-569169" | sudo tee /etc/hostname
	I0923 12:45:18.263968 3107459 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-569169
	
	I0923 12:45:18.264085 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:18.282253 3107459 main.go:141] libmachine: Using SSH client type: native
	I0923 12:45:18.282503 3107459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 42092 <nil> <nil>}
	I0923 12:45:18.282526 3107459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-569169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-569169/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-569169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:45:18.415868 3107459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
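	The heredoc above is minikube's idempotent /etc/hosts rewrite: Debian/Ubuntu images conventionally map the machine hostname to 127.0.1.1, so the script rewrites that entry when one exists and appends it otherwise. Restated as a standalone sketch (hostname hard-coded for illustration; GNU grep/sed assumed):
	
	    # Rewrite the 127.0.1.1 entry to the machine hostname, or append it
	    # if no such entry exists; a no-op when the hostname is already mapped.
	    if ! grep -q '\sno-preload-569169$' /etc/hosts; then
	      if grep -q '^127.0.1.1\s' /etc/hosts; then
	        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-569169/' /etc/hosts
	      else
	        echo '127.0.1.1 no-preload-569169' | sudo tee -a /etc/hosts
	      fi
	    fi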
	I0923 12:45:18.415959 3107459 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19688-2897765/.minikube CaCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19688-2897765/.minikube}
	I0923 12:45:18.416008 3107459 ubuntu.go:177] setting up certificates
	I0923 12:45:18.416031 3107459 provision.go:84] configureAuth start
	I0923 12:45:18.416110 3107459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-569169
	I0923 12:45:18.433785 3107459 provision.go:143] copyHostCerts
	I0923 12:45:18.433862 3107459 exec_runner.go:144] found /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem, removing ...
	I0923 12:45:18.433881 3107459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem
	I0923 12:45:18.433961 3107459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.pem (1078 bytes)
	I0923 12:45:18.434060 3107459 exec_runner.go:144] found /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem, removing ...
	I0923 12:45:18.434065 3107459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem
	I0923 12:45:18.434093 3107459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/cert.pem (1123 bytes)
	I0923 12:45:18.434159 3107459 exec_runner.go:144] found /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem, removing ...
	I0923 12:45:18.434164 3107459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem
	I0923 12:45:18.434200 3107459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19688-2897765/.minikube/key.pem (1675 bytes)
	I0923 12:45:18.434251 3107459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem org=jenkins.no-preload-569169 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-569169]
	I0923 12:45:19.135742 3107459 provision.go:177] copyRemoteCerts
	I0923 12:45:19.135864 3107459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:45:19.135925 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:19.153237 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:19.248464 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:45:19.276297 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:45:19.303809 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 12:45:19.329412 3107459 provision.go:87] duration metric: took 913.351722ms to configureAuth
	I0923 12:45:19.329439 3107459 ubuntu.go:193] setting minikube options for container-runtime
	I0923 12:45:19.329640 3107459 config.go:182] Loaded profile config "no-preload-569169": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:45:19.329652 3107459 machine.go:96] duration metric: took 4.417310366s to provisionDockerMachine
	I0923 12:45:19.329659 3107459 start.go:293] postStartSetup for "no-preload-569169" (driver="docker")
	I0923 12:45:19.329675 3107459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:45:19.329730 3107459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:45:19.329771 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:19.346346 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:19.441054 3107459 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:45:19.444575 3107459 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 12:45:19.444628 3107459 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 12:45:19.444639 3107459 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 12:45:19.444650 3107459 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 12:45:19.444661 3107459 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/addons for local assets ...
	I0923 12:45:19.444725 3107459 filesync.go:126] Scanning /home/jenkins/minikube-integration/19688-2897765/.minikube/files for local assets ...
	I0923 12:45:19.444818 3107459 filesync.go:149] local asset: /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem -> 29031582.pem in /etc/ssl/certs
	I0923 12:45:19.444931 3107459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:45:19.453809 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem --> /etc/ssl/certs/29031582.pem (1708 bytes)
	I0923 12:45:19.489984 3107459 start.go:296] duration metric: took 160.303636ms for postStartSetup
	I0923 12:45:19.490079 3107459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:45:19.490122 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:19.507184 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:19.604936 3107459 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 12:45:19.609751 3107459 fix.go:56] duration metric: took 5.131712204s for fixHost
	I0923 12:45:19.609777 3107459 start.go:83] releasing machines lock for "no-preload-569169", held for 5.131765496s
	I0923 12:45:19.609849 3107459 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-569169
	I0923 12:45:19.627361 3107459 ssh_runner.go:195] Run: cat /version.json
	I0923 12:45:19.627400 3107459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:45:19.627414 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:19.627467 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:19.649101 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:19.656898 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:19.743341 3107459 ssh_runner.go:195] Run: systemctl --version
	I0923 12:45:19.884818 3107459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:45:19.889761 3107459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 12:45:19.908494 3107459 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
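	The find/sed one-liner logged at 12:45:19.889761 is dense; the same patch written as a plain loop, as a sketch (GNU sed assumed, edits copied from the logged command):
	
	    # For each active loopback CNI config: add a "name" field if it is
	    # missing, then pin cniVersion to 1.0.0 (the two sed edits logged above).
	    for f in /etc/cni/net.d/*loopback.conf*; do
	      [ -e "$f" ] || continue
	      case "$f" in *.mk_disabled) continue ;; esac
	      grep -q loopback "$f" || continue
	      grep -q name "$f" || \
	        sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
	      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
	    done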
	I0923 12:45:19.908615 3107459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:45:19.917562 3107459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 12:45:19.917586 3107459 start.go:495] detecting cgroup driver to use...
	I0923 12:45:19.917618 3107459 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 12:45:19.917667 3107459 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:45:19.933866 3107459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:45:19.946857 3107459 docker.go:217] disabling cri-docker service (if available) ...
	I0923 12:45:19.946930 3107459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 12:45:19.960380 3107459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 12:45:19.972393 3107459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 12:45:20.079364 3107459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 12:45:20.173106 3107459 docker.go:233] disabling docker service ...
	I0923 12:45:20.173224 3107459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 12:45:20.187194 3107459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 12:45:20.201026 3107459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 12:45:20.293491 3107459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 12:45:20.395249 3107459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 12:45:20.408100 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:45:20.425805 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:45:20.437494 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:45:20.449498 3107459 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:45:20.449575 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:45:20.460952 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:45:20.482169 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:45:20.497215 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:45:20.508225 3107459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:45:20.519905 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:45:20.530342 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:45:20.541807 3107459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:45:20.554060 3107459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:45:20.566266 3107459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:45:20.579855 3107459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:45:20.663349 3107459 ssh_runner.go:195] Run: sudo systemctl restart containerd
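	Net effect of the sed series above, sketched as the affected /etc/containerd/config.toml keys before the restart (table paths follow containerd 1.7's CRI plugin layout; this is a reconstruction from the logged edits, not a dump of the actual file):
	
	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false   # cgroupfs cgroup driver, per the logged edit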
	I0923 12:45:20.855043 3107459 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0923 12:45:20.855160 3107459 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0923 12:45:20.860111 3107459 start.go:563] Will wait 60s for crictl version
	I0923 12:45:20.860220 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:45:20.864526 3107459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:45:20.913080 3107459 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0923 12:45:20.913197 3107459 ssh_runner.go:195] Run: containerd --version
	I0923 12:45:20.939693 3107459 ssh_runner.go:195] Run: containerd --version
	I0923 12:45:20.966429 3107459 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0923 12:45:17.496501 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:19.993060 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:20.968118 3107459 cli_runner.go:164] Run: docker network inspect no-preload-569169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 12:45:20.982821 3107459 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0923 12:45:20.986705 3107459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:45:20.999822 3107459 kubeadm.go:883] updating cluster {Name:no-preload-569169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-569169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:45:20.999983 3107459 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 12:45:21.000040 3107459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 12:45:21.054257 3107459 containerd.go:627] all images are preloaded for containerd runtime.
	I0923 12:45:21.054286 3107459 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:45:21.054295 3107459 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 containerd true true} ...
	I0923 12:45:21.054414 3107459 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-569169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-569169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:45:21.054490 3107459 ssh_runner.go:195] Run: sudo crictl info
	I0923 12:45:21.107715 3107459 cni.go:84] Creating CNI manager for ""
	I0923 12:45:21.107800 3107459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 12:45:21.107825 3107459 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:45:21.107882 3107459 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-569169 NodeName:no-preload-569169 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:45:21.108064 3107459 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-569169"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
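The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before it is used (see the scp step below). As a hedged sketch, not a step this log performs, such a file can be sanity-checked offline; recent kubeadm releases include a validate subcommand, and a dry-run init exercises the same parsing without touching the cluster:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run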
	
	I0923 12:45:21.108188 3107459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:45:21.118183 3107459 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:45:21.118255 3107459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:45:21.127573 3107459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0923 12:45:21.146854 3107459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:45:21.173164 3107459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0923 12:45:21.191833 3107459 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0923 12:45:21.195499 3107459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:45:21.206471 3107459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:45:21.299913 3107459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:45:21.320773 3107459 certs.go:68] Setting up /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169 for IP: 192.168.76.2
	I0923 12:45:21.320797 3107459 certs.go:194] generating shared ca certs ...
	I0923 12:45:21.320813 3107459 certs.go:226] acquiring lock for ca certs: {Name:mk3307686e47e832a4d12b60b03ff3c8ff918f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:45:21.321016 3107459 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key
	I0923 12:45:21.321082 3107459 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key
	I0923 12:45:21.321096 3107459 certs.go:256] generating profile certs ...
	I0923 12:45:21.321201 3107459 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.key
	I0923 12:45:21.321314 3107459 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/apiserver.key.f20ff4c7
	I0923 12:45:21.321383 3107459 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/proxy-client.key
	I0923 12:45:21.321517 3107459 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/2903158.pem (1338 bytes)
	W0923 12:45:21.321570 3107459 certs.go:480] ignoring /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/2903158_empty.pem, impossibly tiny 0 bytes
	I0923 12:45:21.321583 3107459 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 12:45:21.321612 3107459 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/ca.pem (1078 bytes)
	I0923 12:45:21.321666 3107459 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:45:21.321693 3107459 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/key.pem (1675 bytes)
	I0923 12:45:21.321758 3107459 certs.go:484] found cert: /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem (1708 bytes)
	I0923 12:45:21.322365 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:45:21.351987 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:45:21.380997 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:45:21.410261 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:45:21.446296 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:45:21.479764 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:45:21.535872 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:45:21.564884 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:45:21.595383 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/ssl/certs/29031582.pem --> /usr/share/ca-certificates/29031582.pem (1708 bytes)
	I0923 12:45:21.621833 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:45:21.650439 3107459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19688-2897765/.minikube/certs/2903158.pem --> /usr/share/ca-certificates/2903158.pem (1338 bytes)
	I0923 12:45:21.677384 3107459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:45:21.696174 3107459 ssh_runner.go:195] Run: openssl version
	I0923 12:45:21.703795 3107459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29031582.pem && ln -fs /usr/share/ca-certificates/29031582.pem /etc/ssl/certs/29031582.pem"
	I0923 12:45:21.714292 3107459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29031582.pem
	I0923 12:45:21.718508 3107459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:06 /usr/share/ca-certificates/29031582.pem
	I0923 12:45:21.718616 3107459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29031582.pem
	I0923 12:45:21.725809 3107459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29031582.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:45:21.735423 3107459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:45:21.745372 3107459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:45:21.748950 3107459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:45:21.749018 3107459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:45:21.756123 3107459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:45:21.765318 3107459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2903158.pem && ln -fs /usr/share/ca-certificates/2903158.pem /etc/ssl/certs/2903158.pem"
	I0923 12:45:21.775529 3107459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2903158.pem
	I0923 12:45:21.779212 3107459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:06 /usr/share/ca-certificates/2903158.pem
	I0923 12:45:21.779344 3107459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2903158.pem
	I0923 12:45:21.786846 3107459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2903158.pem /etc/ssl/certs/51391683.0"
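The three openssl/ln rounds above follow the standard OpenSSL subject-hash layout: a certificate dropped into /usr/share/ca-certificates is made trusted by symlinking it as <subject-hash>.0 under /etc/ssl/certs. A minimal sketch of one round, assuming the same paths as the log:

	# -hash prints the legacy subject hash used for the trust-store filename
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH is b5213941 here, matching the log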
	I0923 12:45:21.796253 3107459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:45:21.799984 3107459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 12:45:21.807162 3107459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 12:45:21.814551 3107459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 12:45:21.821875 3107459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 12:45:21.829075 3107459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 12:45:21.836389 3107459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
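Each -checkend 86400 probe above asks openssl whether the certificate expires within the next 24 hours (86400 seconds): exit status 0 means it remains valid past that window, non-zero means it is about to expire and would need regeneration. A standalone example of the same check (the echo branches are illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "cert valid for at least 24h" \
	  || echo "cert expires within 24h"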
	I0923 12:45:21.843452 3107459 kubeadm.go:392] StartCluster: {Name:no-preload-569169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-569169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:45:21.843550 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0923 12:45:21.843661 3107459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 12:45:21.900356 3107459 cri.go:89] found id: "fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a"
	I0923 12:45:21.900377 3107459 cri.go:89] found id: "03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45"
	I0923 12:45:21.900381 3107459 cri.go:89] found id: "fdc6f11b4366cd9b3ba19119f22e95b4fd66c8790667d019db30fb972746b36a"
	I0923 12:45:21.900414 3107459 cri.go:89] found id: "3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c"
	I0923 12:45:21.900418 3107459 cri.go:89] found id: "5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c"
	I0923 12:45:21.900423 3107459 cri.go:89] found id: "0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a"
	I0923 12:45:21.900426 3107459 cri.go:89] found id: "96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481"
	I0923 12:45:21.900430 3107459 cri.go:89] found id: "7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4"
	I0923 12:45:21.900433 3107459 cri.go:89] found id: ""
	I0923 12:45:21.900485 3107459 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0923 12:45:21.923727 3107459 cri.go:116] JSON = null
	W0923 12:45:21.923777 3107459 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
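The warning above comes from cross-checking two views of the same containers: crictl (via the CRI) reports 8 kube-system containers, while runc's state directory for the k8s.io namespace lists none, so the unpause step is skipped as a no-op. A hedged reproduction of the two probes, using the exact commands from the log:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l   # CRI view: 8 container IDs
	sudo runc --root /run/containerd/runc/k8s.io list -f json                           # runc view: null (nothing paused)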
	I0923 12:45:21.923849 3107459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:45:21.942299 3107459 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 12:45:21.942317 3107459 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 12:45:21.942371 3107459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 12:45:21.954613 3107459 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:45:21.955208 3107459 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-569169" does not appear in /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:45:21.955483 3107459 kubeconfig.go:62] /home/jenkins/minikube-integration/19688-2897765/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-569169" cluster setting kubeconfig missing "no-preload-569169" context setting]
	I0923 12:45:21.955979 3107459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:45:21.957357 3107459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 12:45:21.971747 3107459 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0923 12:45:21.971778 3107459 kubeadm.go:597] duration metric: took 29.45461ms to restartPrimaryControlPlane
	I0923 12:45:21.971788 3107459 kubeadm.go:394] duration metric: took 128.354351ms to StartCluster
	I0923 12:45:21.971803 3107459 settings.go:142] acquiring lock: {Name:mk4415211fc0f47c243959f36c7d2f9eeca37653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:45:21.971858 3107459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:45:21.972786 3107459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19688-2897765/kubeconfig: {Name:mkc814324ebd7e6787446f1c0db099ab6daa7ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:45:21.973019 3107459 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0923 12:45:21.973337 3107459 config.go:182] Loaded profile config "no-preload-569169": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:45:21.973401 3107459 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:45:21.973520 3107459 addons.go:69] Setting storage-provisioner=true in profile "no-preload-569169"
	I0923 12:45:21.973538 3107459 addons.go:234] Setting addon storage-provisioner=true in "no-preload-569169"
	W0923 12:45:21.973549 3107459 addons.go:243] addon storage-provisioner should already be in state true
	I0923 12:45:21.973544 3107459 addons.go:69] Setting default-storageclass=true in profile "no-preload-569169"
	I0923 12:45:21.973614 3107459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-569169"
	I0923 12:45:21.973571 3107459 host.go:66] Checking if "no-preload-569169" exists ...
	I0923 12:45:21.974057 3107459 cli_runner.go:164] Run: docker container inspect no-preload-569169 --format={{.State.Status}}
	I0923 12:45:21.974169 3107459 cli_runner.go:164] Run: docker container inspect no-preload-569169 --format={{.State.Status}}
	I0923 12:45:21.973576 3107459 addons.go:69] Setting metrics-server=true in profile "no-preload-569169"
	I0923 12:45:21.974805 3107459 addons.go:234] Setting addon metrics-server=true in "no-preload-569169"
	W0923 12:45:21.974822 3107459 addons.go:243] addon metrics-server should already be in state true
	I0923 12:45:21.974846 3107459 host.go:66] Checking if "no-preload-569169" exists ...
	I0923 12:45:21.975273 3107459 cli_runner.go:164] Run: docker container inspect no-preload-569169 --format={{.State.Status}}
	I0923 12:45:21.973580 3107459 addons.go:69] Setting dashboard=true in profile "no-preload-569169"
	I0923 12:45:21.978113 3107459 addons.go:234] Setting addon dashboard=true in "no-preload-569169"
	W0923 12:45:21.978126 3107459 addons.go:243] addon dashboard should already be in state true
	I0923 12:45:21.978163 3107459 host.go:66] Checking if "no-preload-569169" exists ...
	I0923 12:45:21.979132 3107459 cli_runner.go:164] Run: docker container inspect no-preload-569169 --format={{.State.Status}}
	I0923 12:45:21.982101 3107459 out.go:177] * Verifying Kubernetes components...
	I0923 12:45:21.990325 3107459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:45:22.039148 3107459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0923 12:45:22.041094 3107459 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 12:45:22.041120 3107459 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 12:45:22.041194 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:22.050937 3107459 addons.go:234] Setting addon default-storageclass=true in "no-preload-569169"
	W0923 12:45:22.050959 3107459 addons.go:243] addon default-storageclass should already be in state true
	I0923 12:45:22.050985 3107459 host.go:66] Checking if "no-preload-569169" exists ...
	I0923 12:45:22.052443 3107459 cli_runner.go:164] Run: docker container inspect no-preload-569169 --format={{.State.Status}}
	I0923 12:45:22.057874 3107459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:45:22.060826 3107459 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:45:22.060847 3107459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:45:22.060917 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:22.071174 3107459 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0923 12:45:22.076481 3107459 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0923 12:45:22.081186 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0923 12:45:22.081225 3107459 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0923 12:45:22.081308 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:22.125948 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:22.136772 3107459 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:45:22.136794 3107459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:45:22.136884 3107459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-569169
	I0923 12:45:22.137133 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:22.166748 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:22.175175 3107459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:42092 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/no-preload-569169/id_rsa Username:docker}
	I0923 12:45:22.238144 3107459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:45:22.338501 3107459 node_ready.go:35] waiting up to 6m0s for node "no-preload-569169" to be "Ready" ...
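The node_ready wait above polls the node's Ready condition for up to 6m0s. A hedged equivalent using stock kubectl against the same cluster (not the command minikube runs internally):

	kubectl --context no-preload-569169 wait --for=condition=Ready node/no-preload-569169 --timeout=360s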
	I0923 12:45:22.409787 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0923 12:45:22.409813 3107459 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0923 12:45:22.439002 3107459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:45:22.457318 3107459 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 12:45:22.457343 3107459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0923 12:45:22.486312 3107459 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 12:45:22.486338 3107459 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 12:45:22.522746 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0923 12:45:22.522770 3107459 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0923 12:45:22.537903 3107459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:45:22.614247 3107459 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:45:22.614271 3107459 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 12:45:22.644539 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0923 12:45:22.644566 3107459 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0923 12:45:22.822272 3107459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 12:45:22.950641 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0923 12:45:22.950668 3107459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0923 12:45:23.050324 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0923 12:45:23.050353 3107459 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0923 12:45:23.093533 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0923 12:45:23.093560 3107459 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0923 12:45:23.142225 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0923 12:45:23.142251 3107459 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0923 12:45:23.183302 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0923 12:45:23.183328 3107459 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0923 12:45:23.227243 3107459 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:45:23.227270 3107459 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0923 12:45:23.294294 3107459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 12:45:22.015003 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:24.539397 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:26.972744 3107459 node_ready.go:49] node "no-preload-569169" has status "Ready":"True"
	I0923 12:45:26.972768 3107459 node_ready.go:38] duration metric: took 4.634234049s for node "no-preload-569169" to be "Ready" ...
	I0923 12:45:26.972777 3107459 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:45:27.085464 3107459 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r5klx" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.114218 3107459 pod_ready.go:93] pod "coredns-7c65d6cfc9-r5klx" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:27.114245 3107459 pod_ready.go:82] duration metric: took 27.731484ms for pod "coredns-7c65d6cfc9-r5klx" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.114256 3107459 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.127497 3107459 pod_ready.go:93] pod "etcd-no-preload-569169" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:27.127524 3107459 pod_ready.go:82] duration metric: took 13.259935ms for pod "etcd-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.127540 3107459 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.140415 3107459 pod_ready.go:93] pod "kube-apiserver-no-preload-569169" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:27.140492 3107459 pod_ready.go:82] duration metric: took 12.942857ms for pod "kube-apiserver-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.140529 3107459 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.169025 3107459 pod_ready.go:93] pod "kube-controller-manager-no-preload-569169" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:27.169051 3107459 pod_ready.go:82] duration metric: took 28.500768ms for pod "kube-controller-manager-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.169062 3107459 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bcttj" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.193813 3107459 pod_ready.go:93] pod "kube-proxy-bcttj" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:27.193840 3107459 pod_ready.go:82] duration metric: took 24.770652ms for pod "kube-proxy-bcttj" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.193852 3107459 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.578006 3107459 pod_ready.go:93] pod "kube-scheduler-no-preload-569169" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:27.578098 3107459 pod_ready.go:82] duration metric: took 384.236462ms for pod "kube-scheduler-no-preload-569169" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:27.578125 3107459 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:29.585927 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:30.141315 3107459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.702260218s)
	I0923 12:45:30.141393 3107459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.603467052s)
	W0923 12:45:30.141417 3107459 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 12:45:30.141444 3107459 retry.go:31] will retry after 300.93999ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 12:45:30.141519 3107459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.319217775s)
	I0923 12:45:30.141537 3107459 addons.go:475] Verifying addon metrics-server=true in "no-preload-569169"
	I0923 12:45:30.263219 3107459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.968877355s)
	I0923 12:45:30.265331 3107459 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-569169 addons enable metrics-server
	
	I0923 12:45:30.443563 3107459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
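The --force re-apply above is the retry promised at retry.go:31: the first storageclass apply failed only because the apiserver was not yet accepting connections on localhost:8443. A hedged sketch of the same pattern (paths taken from this log; the loop itself is illustrative, not minikube's code):

	# retry the apply until the apiserver comes up, then stop
	for i in $(seq 1 5); do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml && break
	  sleep 1
	done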
	I0923 12:45:30.647981 3107459 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0923 12:45:26.994612 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:29.497184 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:30.649743 3107459 addons.go:510] duration metric: took 8.676330766s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0923 12:45:32.084527 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:34.100914 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:31.501408 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:33.994220 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:36.584318 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:38.585239 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:36.493434 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:38.494618 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:40.993697 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:41.085910 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:43.584914 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:42.993796 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:45.494023 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:46.084222 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:48.084874 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:47.993809 3102485 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:49.493816 3102485 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:49.493848 3102485 pod_ready.go:82] duration metric: took 1m20.006846553s for pod "kube-controller-manager-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:49.493861 3102485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9m2fb" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:49.504701 3102485 pod_ready.go:93] pod "kube-proxy-9m2fb" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:49.504776 3102485 pod_ready.go:82] duration metric: took 10.905739ms for pod "kube-proxy-9m2fb" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:49.504803 3102485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:50.584540 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:52.585422 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:51.512244 3102485 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:52.510949 3102485 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace has status "Ready":"True"
	I0923 12:45:52.510976 3102485 pod_ready.go:82] duration metric: took 3.006151928s for pod "kube-scheduler-old-k8s-version-605045" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:52.510987 3102485 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace to be "Ready" ...
	I0923 12:45:54.517408 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:55.084943 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:57.085244 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:56.517938 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:59.018038 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:45:59.584727 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:01.584765 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:03.585809 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:01.521178 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:04.017169 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:06.018012 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:06.084366 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:08.085103 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:08.020342 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:10.050466 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:10.085697 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:12.584597 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:12.519575 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:15.028066 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:14.585240 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:17.084818 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:19.084978 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:17.516764 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:19.517985 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:21.584916 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:23.585393 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:21.527598 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:24.018262 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:26.084069 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:28.084871 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:26.516738 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:28.517429 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:30.517488 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:30.085891 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:32.584266 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:32.517691 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:34.518155 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:35.084662 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:37.584826 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:37.019755 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:39.517509 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:39.584893 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:42.085654 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:41.517906 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:44.018200 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:44.583918 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:46.584188 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:48.585500 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:46.518620 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:49.017532 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:51.085557 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:53.584515 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:51.517271 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:53.518286 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:55.518511 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:55.585029 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:57.590044 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:46:58.018393 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:00.040850 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:00.097324 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:02.584655 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:02.519588 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:05.017851 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:05.085283 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:07.584458 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:07.517609 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:10.120171 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:09.584697 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:11.585432 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:14.084057 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:12.516830 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:14.518809 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:16.084844 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:18.086244 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:17.017796 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:19.517358 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:20.584059 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:23.084440 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:21.521109 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:24.017997 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:25.085483 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:27.585336 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:26.517391 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:28.518031 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:31.017891 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:30.086596 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:32.584286 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:33.022935 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:35.025532 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:34.584798 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:37.085034 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:37.517416 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:39.518031 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:39.584643 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:41.584767 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:43.584813 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:41.527952 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:44.017386 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:46.017790 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:45.584955 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:48.085594 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:48.018509 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:50.019200 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:50.085838 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:52.585736 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:52.517046 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:54.519238 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:55.084374 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:57.084529 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:59.084598 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:57.017644 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:47:59.018465 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:01.585010 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:04.084378 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:01.517860 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:04.020306 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:06.584197 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:08.584797 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:06.518007 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:08.518217 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:10.518374 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:11.084766 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:13.583962 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:13.017127 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:15.025052 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:16.084116 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:18.085614 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:17.518912 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:19.519575 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:20.584085 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:22.584160 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:22.017866 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:24.519753 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:25.084739 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:27.591949 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:27.018096 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:29.517084 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:30.095520 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:32.583821 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:31.518247 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:34.017560 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:36.018135 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:34.584868 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:37.085221 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:39.085432 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:38.018563 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:40.033260 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:41.584139 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:43.584493 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:42.516878 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:44.518028 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:46.084462 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:48.084648 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:47.017558 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:49.017887 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:50.084902 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:52.584596 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:51.104384 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:53.518375 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:56.017073 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:54.585350 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:57.084590 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:58.018046 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:00.051557 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:48:59.584184 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:01.584554 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:03.584783 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:02.518428 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:05.018309 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:06.086357 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:08.586028 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:07.517421 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:09.517540 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:11.084457 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:13.584204 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:12.017948 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:14.518324 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:15.584669 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:18.085144 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:16.518725 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:19.017071 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:21.018952 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:20.085290 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:22.584914 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:23.517806 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:26.017683 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:25.085050 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:27.585265 3107459 pod_ready.go:103] pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:27.585313 3107459 pod_ready.go:82] duration metric: took 4m0.007165977s for pod "metrics-server-6867b74b74-5twtp" in "kube-system" namespace to be "Ready" ...
	E0923 12:49:27.585324 3107459 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 12:49:27.585333 3107459 pod_ready.go:39] duration metric: took 4m0.612545092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
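The pod_ready.go loop above polls the pod's Ready condition every couple of seconds until the 4m0s deadline expires. A minimal client-go sketch of that kind of check follows; it assumes a kubeconfig at the default path, and the file/variable names are illustrative rather than minikube's actual implementation. The pod name is the one from the log.

// pod_ready_sketch.go — a minimal sketch of the Ready-condition poll above;
// assumes client-go and a local kubeconfig. Not minikube's actual code.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// The pod that never became Ready in the run above.
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.TODO(), "metrics-server-6867b74b74-5twtp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			// This is the `"Ready":"False"` status printed repeatedly above.
			fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
		}
	}
}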
	I0923 12:49:27.585360 3107459 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:49:27.585395 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 12:49:27.585457 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 12:49:27.631650 3107459 cri.go:89] found id: "9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437"
	I0923 12:49:27.631717 3107459 cri.go:89] found id: "7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4"
	I0923 12:49:27.631729 3107459 cri.go:89] found id: ""
	I0923 12:49:27.631737 3107459 logs.go:276] 2 containers: [9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437 7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4]
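Each "listing CRI containers" step above runs the crictl command shown verbatim in the log; `--quiet` prints one container ID per line, which cri.go collects into the "found id" list. A small sketch of that enumeration, under the assumption that crictl is on the PATH:

// cri_list_sketch.go — sketch of the container enumeration logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the log: sudo crictl ps -a --quiet --name=kube-apiserver
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out)) // non-empty lines -> container IDs
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}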
	I0923 12:49:27.631795 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.636334 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.640122 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 12:49:27.640250 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 12:49:27.680878 3107459 cri.go:89] found id: "177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982"
	I0923 12:49:27.680905 3107459 cri.go:89] found id: "0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a"
	I0923 12:49:27.680910 3107459 cri.go:89] found id: ""
	I0923 12:49:27.680917 3107459 logs.go:276] 2 containers: [177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982 0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a]
	I0923 12:49:27.680997 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.684773 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.689238 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 12:49:27.689315 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 12:49:27.731159 3107459 cri.go:89] found id: "116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6"
	I0923 12:49:27.731232 3107459 cri.go:89] found id: "fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a"
	I0923 12:49:27.731251 3107459 cri.go:89] found id: ""
	I0923 12:49:27.731274 3107459 logs.go:276] 2 containers: [116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6 fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a]
	I0923 12:49:27.731412 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.735411 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.738996 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 12:49:27.739078 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 12:49:27.784507 3107459 cri.go:89] found id: "8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1"
	I0923 12:49:27.784603 3107459 cri.go:89] found id: "5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c"
	I0923 12:49:27.784636 3107459 cri.go:89] found id: ""
	I0923 12:49:27.784679 3107459 logs.go:276] 2 containers: [8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1 5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c]
	I0923 12:49:27.784787 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.789657 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.793738 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 12:49:27.793838 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 12:49:27.836366 3107459 cri.go:89] found id: "eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1"
	I0923 12:49:27.836392 3107459 cri.go:89] found id: "3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c"
	I0923 12:49:27.836409 3107459 cri.go:89] found id: ""
	I0923 12:49:27.836417 3107459 logs.go:276] 2 containers: [eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1 3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c]
	I0923 12:49:27.836476 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.840524 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.848358 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 12:49:27.848461 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 12:49:27.893710 3107459 cri.go:89] found id: "267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd"
	I0923 12:49:27.893736 3107459 cri.go:89] found id: "96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481"
	I0923 12:49:27.893741 3107459 cri.go:89] found id: ""
	I0923 12:49:27.893749 3107459 logs.go:276] 2 containers: [267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd 96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481]
	I0923 12:49:27.893863 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.897696 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.901525 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 12:49:27.901599 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 12:49:27.942029 3107459 cri.go:89] found id: "a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2"
	I0923 12:49:27.942049 3107459 cri.go:89] found id: "03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45"
	I0923 12:49:27.942054 3107459 cri.go:89] found id: ""
	I0923 12:49:27.942061 3107459 logs.go:276] 2 containers: [a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2 03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45]
	I0923 12:49:27.942120 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.946050 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.949585 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 12:49:27.949666 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 12:49:27.991564 3107459 cri.go:89] found id: "64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6"
	I0923 12:49:27.991588 3107459 cri.go:89] found id: "3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef"
	I0923 12:49:27.991593 3107459 cri.go:89] found id: ""
	I0923 12:49:27.991601 3107459 logs.go:276] 2 containers: [64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6 3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef]
	I0923 12:49:27.991658 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.995383 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:27.998764 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 12:49:27.998874 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 12:49:28.050024 3107459 cri.go:89] found id: "c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776"
	I0923 12:49:28.050049 3107459 cri.go:89] found id: ""
	I0923 12:49:28.050057 3107459 logs.go:276] 1 containers: [c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776]
	I0923 12:49:28.050115 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:28.054040 3107459 logs.go:123] Gathering logs for coredns [116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6] ...
	I0923 12:49:28.054079 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6"
	I0923 12:49:28.110088 3107459 logs.go:123] Gathering logs for kube-scheduler [5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c] ...
	I0923 12:49:28.110173 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c"
	I0923 12:49:28.160124 3107459 logs.go:123] Gathering logs for storage-provisioner [64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6] ...
	I0923 12:49:28.160155 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6"
	I0923 12:49:28.198211 3107459 logs.go:123] Gathering logs for container status ...
	I0923 12:49:28.198244 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
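The "container status" command above is a shell fallback chain: prefer crictl if `which` finds it, otherwise fall back to `docker ps -a`. A rough local stand-in for that ssh_runner invocation, run through bash exactly as logged:

// runner_sketch.go — mirrors the fallback logged above:
// sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}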
	I0923 12:49:28.257585 3107459 logs.go:123] Gathering logs for kube-scheduler [8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1] ...
	I0923 12:49:28.257614 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1"
	I0923 12:49:28.296724 3107459 logs.go:123] Gathering logs for kube-proxy [3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c] ...
	I0923 12:49:28.296752 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c"
	I0923 12:49:28.342932 3107459 logs.go:123] Gathering logs for storage-provisioner [3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef] ...
	I0923 12:49:28.342961 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef"
	I0923 12:49:28.387395 3107459 logs.go:123] Gathering logs for kubernetes-dashboard [c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776] ...
	I0923 12:49:28.387425 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776"
	I0923 12:49:28.434998 3107459 logs.go:123] Gathering logs for kubelet ...
	I0923 12:49:28.435025 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:49:28.484876 3107459 logs.go:138] Found kubelet problem: Sep 23 12:45:31 no-preload-569169 kubelet[661]: W0923 12:45:31.106205     661 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-569169" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-569169' and this object
	W0923 12:49:28.485133 3107459 logs.go:138] Found kubelet problem: Sep 23 12:45:31 no-preload-569169 kubelet[661]: E0923 12:45:31.106723     661 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-569169\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-569169' and this object" logger="UnhandledError"
	I0923 12:49:28.516944 3107459 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:49:28.516981 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:49:28.668444 3107459 logs.go:123] Gathering logs for kube-apiserver [7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4] ...
	I0923 12:49:28.668475 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4"
	I0923 12:49:28.725090 3107459 logs.go:123] Gathering logs for coredns [fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a] ...
	I0923 12:49:28.725129 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a"
	I0923 12:49:28.766347 3107459 logs.go:123] Gathering logs for dmesg ...
	I0923 12:49:28.766417 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:49:28.784993 3107459 logs.go:123] Gathering logs for kube-apiserver [9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437] ...
	I0923 12:49:28.785070 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437"
	I0923 12:49:28.837937 3107459 logs.go:123] Gathering logs for kube-controller-manager [267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd] ...
	I0923 12:49:28.837968 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd"
	I0923 12:49:28.906996 3107459 logs.go:123] Gathering logs for containerd ...
	I0923 12:49:28.907032 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 12:49:28.975043 3107459 logs.go:123] Gathering logs for kindnet [a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2] ...
	I0923 12:49:28.975083 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2"
	I0923 12:49:29.028562 3107459 logs.go:123] Gathering logs for kindnet [03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45] ...
	I0923 12:49:29.028598 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45"
	I0923 12:49:29.070915 3107459 logs.go:123] Gathering logs for etcd [177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982] ...
	I0923 12:49:29.070947 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982"
	I0923 12:49:29.120434 3107459 logs.go:123] Gathering logs for etcd [0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a] ...
	I0923 12:49:29.120467 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a"
	I0923 12:49:29.175630 3107459 logs.go:123] Gathering logs for kube-proxy [eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1] ...
	I0923 12:49:29.175662 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1"
	I0923 12:49:29.222468 3107459 logs.go:123] Gathering logs for kube-controller-manager [96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481] ...
	I0923 12:49:29.222500 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481"
	I0923 12:49:28.018157 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:30.030456 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:29.282026 3107459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:29.282058 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:49:29.282127 3107459 out.go:270] X Problems detected in kubelet:
	W0923 12:49:29.282144 3107459 out.go:270]   Sep 23 12:45:31 no-preload-569169 kubelet[661]: W0923 12:45:31.106205     661 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-569169" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-569169' and this object
	W0923 12:49:29.282155 3107459 out.go:270]   Sep 23 12:45:31 no-preload-569169 kubelet[661]: E0923 12:45:31.106723     661 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-569169\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-569169' and this object" logger="UnhandledError"
	I0923 12:49:29.282172 3107459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:29.282179 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
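The two reflector errors flagged in the "Problems detected in kubelet" summary above are a node-authorizer denial: the kubelet may not list the kubernetes-dashboard kube-root-ca.crt ConfigMap until a pod bound to that node references it, so the warning is typically transient during startup. The "Found kubelet problem" lines come from a pattern scan over the journalctl output (logs.go:138). A hypothetical sketch of such a scan; the pattern list here is illustrative, not minikube's actual set:

// problem_scan_sketch.go — hypothetical sketch of the kubelet-problem scan.
// Pipe in e.g.: journalctl -u kubelet -n 400
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Patterns like these would catch the two reflector errors flagged above.
	patterns := []string{"failed to list *v1.ConfigMap", "Unhandled Error"}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}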
	I0923 12:49:32.517329 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:34.517424 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:36.517606 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:39.017532 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:39.284016 3107459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:49:39.296398 3107459 api_server.go:72] duration metric: took 4m17.32334127s to wait for apiserver process to appear ...
	I0923 12:49:39.296424 3107459 api_server.go:88] waiting for apiserver healthz status ...
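The apiserver process wait above relies on pgrep's exit code: `sudo pgrep -xnf kube-apiserver.*minikube.*` exits 0 only when a matching process exists. A minimal polling sketch of that wait; the retry count and interval are assumptions, not minikube's tuning:

// apiserver_wait_sketch.go — sketch of the process wait logged above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pattern := "kube-apiserver.*minikube.*" // same pattern as the log line above
	for i := 0; i < 60; i++ {
		// pgrep exits 0 only when at least one matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}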
	I0923 12:49:39.296461 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 12:49:39.296520 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 12:49:39.348939 3107459 cri.go:89] found id: "9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437"
	I0923 12:49:39.348960 3107459 cri.go:89] found id: "7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4"
	I0923 12:49:39.348965 3107459 cri.go:89] found id: ""
	I0923 12:49:39.348973 3107459 logs.go:276] 2 containers: [9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437 7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4]
	I0923 12:49:39.349034 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.352826 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.356518 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 12:49:39.356606 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 12:49:39.397435 3107459 cri.go:89] found id: "177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982"
	I0923 12:49:39.397470 3107459 cri.go:89] found id: "0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a"
	I0923 12:49:39.397476 3107459 cri.go:89] found id: ""
	I0923 12:49:39.397483 3107459 logs.go:276] 2 containers: [177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982 0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a]
	I0923 12:49:39.397542 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.401464 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.405171 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 12:49:39.405245 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 12:49:39.443075 3107459 cri.go:89] found id: "116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6"
	I0923 12:49:39.443095 3107459 cri.go:89] found id: "fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a"
	I0923 12:49:39.443101 3107459 cri.go:89] found id: ""
	I0923 12:49:39.443107 3107459 logs.go:276] 2 containers: [116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6 fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a]
	I0923 12:49:39.443181 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.447015 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.451533 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 12:49:39.451656 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 12:49:39.493693 3107459 cri.go:89] found id: "8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1"
	I0923 12:49:39.493717 3107459 cri.go:89] found id: "5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c"
	I0923 12:49:39.493723 3107459 cri.go:89] found id: ""
	I0923 12:49:39.493730 3107459 logs.go:276] 2 containers: [8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1 5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c]
	I0923 12:49:39.493790 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.497518 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.501418 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 12:49:39.501495 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 12:49:39.541972 3107459 cri.go:89] found id: "eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1"
	I0923 12:49:39.542033 3107459 cri.go:89] found id: "3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c"
	I0923 12:49:39.542065 3107459 cri.go:89] found id: ""
	I0923 12:49:39.542092 3107459 logs.go:276] 2 containers: [eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1 3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c]
	I0923 12:49:39.542179 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.546139 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.549599 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 12:49:39.549677 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 12:49:39.596247 3107459 cri.go:89] found id: "267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd"
	I0923 12:49:39.596325 3107459 cri.go:89] found id: "96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481"
	I0923 12:49:39.596358 3107459 cri.go:89] found id: ""
	I0923 12:49:39.596379 3107459 logs.go:276] 2 containers: [267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd 96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481]
	I0923 12:49:39.596465 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.602259 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.606222 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 12:49:39.606301 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 12:49:39.646231 3107459 cri.go:89] found id: "a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2"
	I0923 12:49:39.646255 3107459 cri.go:89] found id: "03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45"
	I0923 12:49:39.646260 3107459 cri.go:89] found id: ""
	I0923 12:49:39.646267 3107459 logs.go:276] 2 containers: [a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2 03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45]
	I0923 12:49:39.646325 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.650273 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.655071 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 12:49:39.655144 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 12:49:39.728569 3107459 cri.go:89] found id: "c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776"
	I0923 12:49:39.728632 3107459 cri.go:89] found id: ""
	I0923 12:49:39.728665 3107459 logs.go:276] 1 containers: [c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776]
	I0923 12:49:39.728759 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.734052 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 12:49:39.734134 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 12:49:39.775492 3107459 cri.go:89] found id: "64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6"
	I0923 12:49:39.775517 3107459 cri.go:89] found id: "3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef"
	I0923 12:49:39.775523 3107459 cri.go:89] found id: ""
	I0923 12:49:39.775530 3107459 logs.go:276] 2 containers: [64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6 3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef]
	I0923 12:49:39.775587 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.779214 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:39.783033 3107459 logs.go:123] Gathering logs for kube-apiserver [7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4] ...
	I0923 12:49:39.783057 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4"
	I0923 12:49:39.842820 3107459 logs.go:123] Gathering logs for kindnet [a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2] ...
	I0923 12:49:39.843045 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2"
	I0923 12:49:39.898383 3107459 logs.go:123] Gathering logs for storage-provisioner [64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6] ...
	I0923 12:49:39.898413 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6"
	I0923 12:49:39.936886 3107459 logs.go:123] Gathering logs for coredns [116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6] ...
	I0923 12:49:39.936918 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6"
	I0923 12:49:39.978765 3107459 logs.go:123] Gathering logs for kube-controller-manager [96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481] ...
	I0923 12:49:39.978795 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481"
	I0923 12:49:40.068541 3107459 logs.go:123] Gathering logs for kubelet ...
	I0923 12:49:40.068581 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:49:40.125340 3107459 logs.go:138] Found kubelet problem: Sep 23 12:45:31 no-preload-569169 kubelet[661]: W0923 12:45:31.106205     661 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-569169" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-569169' and this object
	W0923 12:49:40.125637 3107459 logs.go:138] Found kubelet problem: Sep 23 12:45:31 no-preload-569169 kubelet[661]: E0923 12:45:31.106723     661 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-569169\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-569169' and this object" logger="UnhandledError"
	I0923 12:49:40.157780 3107459 logs.go:123] Gathering logs for kube-apiserver [9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437] ...
	I0923 12:49:40.157821 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437"
	I0923 12:49:40.231842 3107459 logs.go:123] Gathering logs for etcd [0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a] ...
	I0923 12:49:40.231870 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a"
	I0923 12:49:40.285582 3107459 logs.go:123] Gathering logs for kube-proxy [eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1] ...
	I0923 12:49:40.285622 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1"
	I0923 12:49:40.326206 3107459 logs.go:123] Gathering logs for dmesg ...
	I0923 12:49:40.326232 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:49:40.342867 3107459 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:49:40.342897 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:49:40.472567 3107459 logs.go:123] Gathering logs for coredns [fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a] ...
	I0923 12:49:40.472600 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a"
	I0923 12:49:40.520756 3107459 logs.go:123] Gathering logs for kube-proxy [3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c] ...
	I0923 12:49:40.520786 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c"
	I0923 12:49:40.569001 3107459 logs.go:123] Gathering logs for kube-controller-manager [267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd] ...
	I0923 12:49:40.569069 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd"
	I0923 12:49:40.632798 3107459 logs.go:123] Gathering logs for kindnet [03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45] ...
	I0923 12:49:40.632838 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45"
	I0923 12:49:40.678473 3107459 logs.go:123] Gathering logs for kubernetes-dashboard [c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776] ...
	I0923 12:49:40.678503 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776"
	I0923 12:49:40.726132 3107459 logs.go:123] Gathering logs for storage-provisioner [3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef] ...
	I0923 12:49:40.726160 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef"
	I0923 12:49:40.769230 3107459 logs.go:123] Gathering logs for etcd [177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982] ...
	I0923 12:49:40.769262 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982"
	I0923 12:49:40.819328 3107459 logs.go:123] Gathering logs for kube-scheduler [8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1] ...
	I0923 12:49:40.819451 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1"
	I0923 12:49:40.873664 3107459 logs.go:123] Gathering logs for kube-scheduler [5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c] ...
	I0923 12:49:40.873695 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c"
	I0923 12:49:40.922111 3107459 logs.go:123] Gathering logs for containerd ...
	I0923 12:49:40.922151 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 12:49:40.985502 3107459 logs.go:123] Gathering logs for container status ...
	I0923 12:49:40.985537 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:49:41.036178 3107459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:41.036206 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:49:41.036289 3107459 out.go:270] X Problems detected in kubelet:
	W0923 12:49:41.036306 3107459 out.go:270]   Sep 23 12:45:31 no-preload-569169 kubelet[661]: W0923 12:45:31.106205     661 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-569169" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-569169' and this object
	W0923 12:49:41.036314 3107459 out.go:270]   Sep 23 12:45:31 no-preload-569169 kubelet[661]: E0923 12:45:31.106723     661 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-569169\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-569169' and this object" logger="UnhandledError"
	I0923 12:49:41.036350 3107459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:41.036368 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:49:41.516456 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:43.522639 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:46.019113 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:48.022948 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:50.517786 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:51.037685 3107459 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0923 12:49:51.046420 3107459 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0923 12:49:51.047587 3107459 api_server.go:141] control plane version: v1.31.1
	I0923 12:49:51.047617 3107459 api_server.go:131] duration metric: took 11.751185788s to wait for apiserver health ...
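The healthz wait above polls the endpoint from the log until it answers "200: ok". A rough sketch of one probe; the real check uses the cluster's client credentials and transport, so the InsecureSkipVerify below is only to keep the sketch self-contained, and the endpoint may require credentials depending on cluster RBAC:

// healthz_sketch.go — sketch of the healthz probe logged above.
// NOT how minikube authenticates; illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok" as logged above
}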
	I0923 12:49:51.047627 3107459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:49:51.047649 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 12:49:51.047723 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 12:49:51.088127 3107459 cri.go:89] found id: "9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437"
	I0923 12:49:51.088159 3107459 cri.go:89] found id: "7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4"
	I0923 12:49:51.088169 3107459 cri.go:89] found id: ""
	I0923 12:49:51.088180 3107459 logs.go:276] 2 containers: [9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437 7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4]
	I0923 12:49:51.088249 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.093661 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.097584 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 12:49:51.097691 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 12:49:51.137926 3107459 cri.go:89] found id: "177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982"
	I0923 12:49:51.137948 3107459 cri.go:89] found id: "0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a"
	I0923 12:49:51.137953 3107459 cri.go:89] found id: ""
	I0923 12:49:51.137961 3107459 logs.go:276] 2 containers: [177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982 0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a]
	I0923 12:49:51.138044 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.142029 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.145999 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 12:49:51.146078 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 12:49:51.194598 3107459 cri.go:89] found id: "116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6"
	I0923 12:49:51.194619 3107459 cri.go:89] found id: "fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a"
	I0923 12:49:51.194624 3107459 cri.go:89] found id: ""
	I0923 12:49:51.194631 3107459 logs.go:276] 2 containers: [116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6 fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a]
	I0923 12:49:51.194689 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.198489 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.202024 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 12:49:51.202095 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 12:49:51.241619 3107459 cri.go:89] found id: "8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1"
	I0923 12:49:51.241640 3107459 cri.go:89] found id: "5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c"
	I0923 12:49:51.241646 3107459 cri.go:89] found id: ""
	I0923 12:49:51.241653 3107459 logs.go:276] 2 containers: [8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1 5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c]
	I0923 12:49:51.241709 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.245706 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.249324 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 12:49:51.249435 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 12:49:51.289211 3107459 cri.go:89] found id: "eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1"
	I0923 12:49:51.289243 3107459 cri.go:89] found id: "3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c"
	I0923 12:49:51.289248 3107459 cri.go:89] found id: ""
	I0923 12:49:51.289256 3107459 logs.go:276] 2 containers: [eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1 3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c]
	I0923 12:49:51.289312 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.293195 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.296613 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 12:49:51.296685 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 12:49:51.333977 3107459 cri.go:89] found id: "267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd"
	I0923 12:49:51.334000 3107459 cri.go:89] found id: "96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481"
	I0923 12:49:51.334011 3107459 cri.go:89] found id: ""
	I0923 12:49:51.334018 3107459 logs.go:276] 2 containers: [267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd 96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481]
	I0923 12:49:51.334103 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.338027 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.342155 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 12:49:51.342230 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 12:49:51.382904 3107459 cri.go:89] found id: "a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2"
	I0923 12:49:51.382927 3107459 cri.go:89] found id: "03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45"
	I0923 12:49:51.382932 3107459 cri.go:89] found id: ""
	I0923 12:49:51.382939 3107459 logs.go:276] 2 containers: [a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2 03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45]
	I0923 12:49:51.382999 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.386593 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.390052 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 12:49:51.390120 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 12:49:51.427871 3107459 cri.go:89] found id: "c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776"
	I0923 12:49:51.427894 3107459 cri.go:89] found id: ""
	I0923 12:49:51.427901 3107459 logs.go:276] 1 containers: [c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776]
	I0923 12:49:51.427960 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.431775 3107459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 12:49:51.431851 3107459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 12:49:51.493065 3107459 cri.go:89] found id: "64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6"
	I0923 12:49:51.493100 3107459 cri.go:89] found id: "3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef"
	I0923 12:49:51.493106 3107459 cri.go:89] found id: ""
	I0923 12:49:51.493114 3107459 logs.go:276] 2 containers: [64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6 3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef]
	I0923 12:49:51.493169 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.496969 3107459 ssh_runner.go:195] Run: which crictl
	I0923 12:49:51.500546 3107459 logs.go:123] Gathering logs for coredns [fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a] ...
	I0923 12:49:51.500572 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc42fc8d62cf1e2651d8ec16f744c88d9cc4b1bc29d3b60b39cd11f7143eb57a"
	I0923 12:49:51.542167 3107459 logs.go:123] Gathering logs for kube-proxy [3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c] ...
	I0923 12:49:51.542196 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f9473f71c752b78182e203393ef4108e71b02e2a60351b2a96b5466cabe3b7c"
	I0923 12:49:51.590390 3107459 logs.go:123] Gathering logs for kube-controller-manager [267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd] ...
	I0923 12:49:51.590425 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 267e29d3bc50392009bbb0fcb8347e0d17b44537150adb1804a38eb2164909cd"
	I0923 12:49:51.681232 3107459 logs.go:123] Gathering logs for kube-controller-manager [96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481] ...
	I0923 12:49:51.681267 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96ed3a9bd428b879be50768c0168d22bfda361abc363afe11655ff1c39325481"
	I0923 12:49:51.766915 3107459 logs.go:123] Gathering logs for kindnet [a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2] ...
	I0923 12:49:51.766953 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ef30766d6e29d4113e8052b1409c20fd911dd9a60f8ffe5efea8b6e116fcc2"
	I0923 12:49:51.821438 3107459 logs.go:123] Gathering logs for storage-provisioner [64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6] ...
	I0923 12:49:51.821528 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64d9255be7c627de8ed6da81ca76b6f29de90a0cadaf130c0b36763d33a668f6"
	I0923 12:49:51.885329 3107459 logs.go:123] Gathering logs for kube-apiserver [7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4] ...
	I0923 12:49:51.885362 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e839ac9dd4b4097f7328079de3415ae795f8651729dc528e954bce551d9cfa4"
	I0923 12:49:51.962908 3107459 logs.go:123] Gathering logs for etcd [0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a] ...
	I0923 12:49:51.962997 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0334d040b928229782ff2ee39db6d6f51d609598bcb2d3f2605a9d7f8f48213a"
	I0923 12:49:52.023016 3107459 logs.go:123] Gathering logs for kube-scheduler [8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1] ...
	I0923 12:49:52.023048 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a987e5e7c638c416fbb62da6f6ebcf4fc81c638227d26b48306b7121da05cb1"
	I0923 12:49:52.069963 3107459 logs.go:123] Gathering logs for storage-provisioner [3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef] ...
	I0923 12:49:52.069990 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a75d4eacfb3590fb86f5174b32c74fa5562eccfafe32efb6738f0007cc221ef"
	I0923 12:49:52.109128 3107459 logs.go:123] Gathering logs for container status ...
	I0923 12:49:52.109157 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:49:52.158327 3107459 logs.go:123] Gathering logs for kube-apiserver [9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437] ...
	I0923 12:49:52.158356 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9789f1a1e638a4fbff99a542969d04a8b18fabaa93ab79223416bd07e0bb5437"
	I0923 12:49:52.233782 3107459 logs.go:123] Gathering logs for etcd [177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982] ...
	I0923 12:49:52.233816 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 177fa3486596f97ba5a45b458967a375a7eb68ae063bffb298e224632974d982"
	I0923 12:49:52.287244 3107459 logs.go:123] Gathering logs for kube-scheduler [5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c] ...
	I0923 12:49:52.287273 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cb2c00337000bb7cd8bdff89255363283366b9b3f8117d3f867c57d1f6b384c"
	I0923 12:49:52.340846 3107459 logs.go:123] Gathering logs for kube-proxy [eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1] ...
	I0923 12:49:52.340883 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb5d52b20c6a7e29036d1bc211061b8ffa59d484083ad11f61fd3fee248047f1"
	I0923 12:49:52.383408 3107459 logs.go:123] Gathering logs for kubernetes-dashboard [c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776] ...
	I0923 12:49:52.383433 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c08bad2a807ce5ec7fef9b7e5a64f982639a783e14ab37c238802b6506881776"
	I0923 12:49:52.420433 3107459 logs.go:123] Gathering logs for kubelet ...
	I0923 12:49:52.420464 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:49:52.468033 3107459 logs.go:138] Found kubelet problem: Sep 23 12:45:31 no-preload-569169 kubelet[661]: W0923 12:45:31.106205     661 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-569169" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-569169' and this object
	W0923 12:49:52.468287 3107459 logs.go:138] Found kubelet problem: Sep 23 12:45:31 no-preload-569169 kubelet[661]: E0923 12:45:31.106723     661 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-569169\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-569169' and this object" logger="UnhandledError"
	I0923 12:49:52.501087 3107459 logs.go:123] Gathering logs for dmesg ...
	I0923 12:49:52.501131 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:49:52.524116 3107459 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:49:52.524147 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:49:52.678387 3107459 logs.go:123] Gathering logs for coredns [116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6] ...
	I0923 12:49:52.678539 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 116e6961e023a6bf08dc653125f9d94f9a87b445631be992c49c815fd25409f6"
	I0923 12:49:52.754890 3107459 logs.go:123] Gathering logs for kindnet [03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45] ...
	I0923 12:49:52.754919 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03be0861b3b41a85dc2bafc39df39e7351691a862ae53f8f2cf62c860801df45"
	I0923 12:49:52.834155 3107459 logs.go:123] Gathering logs for containerd ...
	I0923 12:49:52.834183 3107459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 12:49:52.914569 3107459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:52.914603 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:49:52.915416 3107459 out.go:270] X Problems detected in kubelet:
	W0923 12:49:52.915478 3107459 out.go:270]   Sep 23 12:45:31 no-preload-569169 kubelet[661]: W0923 12:45:31.106205     661 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-569169" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-569169' and this object
	W0923 12:49:52.915531 3107459 out.go:270]   Sep 23 12:45:31 no-preload-569169 kubelet[661]: E0923 12:45:31.106723     661 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-569169\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-569169' and this object" logger="UnhandledError"
	I0923 12:49:52.915569 3107459 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:52.915603 3107459 out.go:392] TERM=,COLORTERM=, which probably does not support color
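Note: each "Gathering logs for ..." pair above tails the last 400 lines of one container via crictl, and the kubelet pass scans the journald output for known problem patterns. The two reflector.go warnings it flags come from the kube-apiserver node authorizer ("no relationship found between node ... and this object"), which is typically a transient condition while a restarted node re-registers its pods. A sketch of the equivalent manual commands (the container-ID placeholder and the grep filter are illustrative, not taken from minikube):

    # Tail one container's logs, as logs.go does for each ID found above.
    sudo /usr/bin/crictl logs --tail 400 <container-id>
    # Scan the recent kubelet journal for the flagged reflector warnings.
    sudo journalctl -u kubelet -n 400 | grep reflector.go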
	I0923 12:49:52.520527 3102485 pod_ready.go:103] pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace has status "Ready":"False"
	I0923 12:49:52.520556 3102485 pod_ready.go:82] duration metric: took 4m0.009561565s for pod "metrics-server-9975d5f86-kf958" in "kube-system" namespace to be "Ready" ...
	E0923 12:49:52.520568 3102485 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 12:49:52.520575 3102485 pod_ready.go:39] duration metric: took 5m23.27838172s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
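Note: the pod_ready lines above record the proximate failure for this profile: metrics-server-9975d5f86-kf958 never reported Ready, so the 4m0s WaitExtra budget expired with context deadline exceeded after 5m23s of total extra waiting. One way to inspect the pod directly (assuming the kubectl context name matches the profile/node name old-k8s-version-605045 shown in the kubelet entries below):

    kubectl --context old-k8s-version-605045 -n kube-system describe pod metrics-server-9975d5f86-kf958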
	I0923 12:49:52.520590 3102485 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:49:52.520620 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 12:49:52.520680 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 12:49:52.582116 3102485 cri.go:89] found id: "b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:49:52.582139 3102485 cri.go:89] found id: "2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:49:52.582143 3102485 cri.go:89] found id: ""
	I0923 12:49:52.582150 3102485 logs.go:276] 2 containers: [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85]
	I0923 12:49:52.582208 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.586211 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.590182 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 12:49:52.590251 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 12:49:52.643577 3102485 cri.go:89] found id: "ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:49:52.643596 3102485 cri.go:89] found id: "1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:49:52.643601 3102485 cri.go:89] found id: ""
	I0923 12:49:52.643608 3102485 logs.go:276] 2 containers: [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2]
	I0923 12:49:52.643669 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.649826 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.653847 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 12:49:52.653956 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 12:49:52.723423 3102485 cri.go:89] found id: "0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:49:52.723449 3102485 cri.go:89] found id: "8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:49:52.723465 3102485 cri.go:89] found id: ""
	I0923 12:49:52.723473 3102485 logs.go:276] 2 containers: [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993]
	I0923 12:49:52.723539 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.728406 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.739893 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 12:49:52.739971 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 12:49:52.794862 3102485 cri.go:89] found id: "df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:49:52.794881 3102485 cri.go:89] found id: "2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:49:52.794886 3102485 cri.go:89] found id: ""
	I0923 12:49:52.794893 3102485 logs.go:276] 2 containers: [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef]
	I0923 12:49:52.794947 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.798877 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.802564 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 12:49:52.802683 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 12:49:52.882169 3102485 cri.go:89] found id: "3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:49:52.882242 3102485 cri.go:89] found id: "c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:49:52.882262 3102485 cri.go:89] found id: ""
	I0923 12:49:52.882286 3102485 logs.go:276] 2 containers: [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8]
	I0923 12:49:52.882372 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.886656 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.890389 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 12:49:52.890507 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 12:49:52.943880 3102485 cri.go:89] found id: "37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:49:52.943913 3102485 cri.go:89] found id: "7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:49:52.943918 3102485 cri.go:89] found id: ""
	I0923 12:49:52.943926 3102485 logs.go:276] 2 containers: [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af]
	I0923 12:49:52.943988 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.947650 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.951076 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 12:49:52.951161 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 12:49:52.988808 3102485 cri.go:89] found id: "a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:49:52.988830 3102485 cri.go:89] found id: "7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:49:52.988835 3102485 cri.go:89] found id: ""
	I0923 12:49:52.988848 3102485 logs.go:276] 2 containers: [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f]
	I0923 12:49:52.988909 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.992697 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:52.996136 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 12:49:52.996206 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 12:49:53.042311 3102485 cri.go:89] found id: "7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
	I0923 12:49:53.042336 3102485 cri.go:89] found id: ""
	I0923 12:49:53.042345 3102485 logs.go:276] 1 containers: [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a]
	I0923 12:49:53.042421 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:53.046105 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 12:49:53.046208 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 12:49:53.088283 3102485 cri.go:89] found id: "036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:49:53.088309 3102485 cri.go:89] found id: "0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:49:53.088314 3102485 cri.go:89] found id: ""
	I0923 12:49:53.088321 3102485 logs.go:276] 2 containers: [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649]
	I0923 12:49:53.088398 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:49:53.092061 3102485 ssh_runner.go:195] Run: which crictl
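Note: this second discovery pass belongs to the other minikube process (pid 3102485, the old-k8s-version profile). It finds two IDs per component because the listing uses State:all, so the exited pre-restart container is returned alongside its running replacement. To tell them apart manually, drop --quiet so the STATE column (Running vs. Exited) is visible; this variant is illustrative, not a command from the log:

    sudo crictl ps -a --name=kube-apiserver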
	I0923 12:49:53.095472 3102485 logs.go:123] Gathering logs for kube-controller-manager [7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af] ...
	I0923 12:49:53.095536 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:49:53.152558 3102485 logs.go:123] Gathering logs for kube-apiserver [2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85] ...
	I0923 12:49:53.152592 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:49:53.214318 3102485 logs.go:123] Gathering logs for kube-scheduler [2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef] ...
	I0923 12:49:53.214352 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:49:53.256471 3102485 logs.go:123] Gathering logs for etcd [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a] ...
	I0923 12:49:53.256504 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:49:53.299642 3102485 logs.go:123] Gathering logs for coredns [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb] ...
	I0923 12:49:53.299671 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:49:53.341299 3102485 logs.go:123] Gathering logs for kubernetes-dashboard [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a] ...
	I0923 12:49:53.341331 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
	I0923 12:49:53.382087 3102485 logs.go:123] Gathering logs for kubelet ...
	I0923 12:49:53.382166 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:49:53.438281 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989651     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.438539 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989920     658 reflector.go:138] object-"kube-system"/"coredns-token-ljzll": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ljzll" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.438760 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990004     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-267ss": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-267ss" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.438973 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990077     658 reflector.go:138] object-"kube-system"/"kindnet-token-np6vw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-np6vw" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.439173 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990159     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.439407 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990239     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xjwkr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xjwkr" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.439616 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990319     658 reflector.go:138] object-"default"/"default-token-cl5l9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cl5l9" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.441346 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:29 old-k8s-version-605045 kubelet[658]: E0923 12:44:29.145826     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7ztnm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7ztnm" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:49:53.449611 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.630570     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.450439 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.843250     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.453223 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:45 old-k8s-version-605045 kubelet[658]: E0923 12:44:45.501139     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.455318 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:53 old-k8s-version-605045 kubelet[658]: E0923 12:44:53.971600     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.455647 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:54 old-k8s-version-605045 kubelet[658]: E0923 12:44:54.975959     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.455973 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.411779     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.456158 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.466238     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.456946 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:04 old-k8s-version-605045 kubelet[658]: E0923 12:45:04.013052     658 pod_workers.go:191] Error syncing pod bc5f03f4-beaa-4a9b-a052-24d558f95beb ("storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"
	W0923 12:49:53.459838 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:09 old-k8s-version-605045 kubelet[658]: E0923 12:45:09.476299     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.460496 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:11 old-k8s-version-605045 kubelet[658]: E0923 12:45:11.038230     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.460830 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:16 old-k8s-version-605045 kubelet[658]: E0923 12:45:16.410049     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.461150 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:24 old-k8s-version-605045 kubelet[658]: E0923 12:45:24.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.461739 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:32 old-k8s-version-605045 kubelet[658]: E0923 12:45:32.109824     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.462067 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:36 old-k8s-version-605045 kubelet[658]: E0923 12:45:36.410320     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.462252 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:39 old-k8s-version-605045 kubelet[658]: E0923 12:45:39.471779     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.462581 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:51 old-k8s-version-605045 kubelet[658]: E0923 12:45:51.466132     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.465099 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:53 old-k8s-version-605045 kubelet[658]: E0923 12:45:53.479122     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.465442 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:02 old-k8s-version-605045 kubelet[658]: E0923 12:46:02.465540     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.465628 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:07 old-k8s-version-605045 kubelet[658]: E0923 12:46:07.466508     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.466225 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:17 old-k8s-version-605045 kubelet[658]: E0923 12:46:17.299572     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.466410 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:22 old-k8s-version-605045 kubelet[658]: E0923 12:46:22.465904     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.466735 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:26 old-k8s-version-605045 kubelet[658]: E0923 12:46:26.410045     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.466919 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:35 old-k8s-version-605045 kubelet[658]: E0923 12:46:35.469207     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.467245 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:38 old-k8s-version-605045 kubelet[658]: E0923 12:46:38.465574     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.467441 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:47 old-k8s-version-605045 kubelet[658]: E0923 12:46:47.467336     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.467773 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:52 old-k8s-version-605045 kubelet[658]: E0923 12:46:52.465530     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.467957 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:02 old-k8s-version-605045 kubelet[658]: E0923 12:47:02.465921     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.468283 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:07 old-k8s-version-605045 kubelet[658]: E0923 12:47:07.466739     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.471301 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:14 old-k8s-version-605045 kubelet[658]: E0923 12:47:14.473947     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:49:53.471640 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:22 old-k8s-version-605045 kubelet[658]: E0923 12:47:22.465965     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.471829 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:26 old-k8s-version-605045 kubelet[658]: E0923 12:47:26.465940     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.472156 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:33 old-k8s-version-605045 kubelet[658]: E0923 12:47:33.467362     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.472346 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:38 old-k8s-version-605045 kubelet[658]: E0923 12:47:38.465964     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.472930 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:45 old-k8s-version-605045 kubelet[658]: E0923 12:47:45.536822     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.473261 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:46 old-k8s-version-605045 kubelet[658]: E0923 12:47:46.541111     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.473453 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:49 old-k8s-version-605045 kubelet[658]: E0923 12:47:49.465875     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.473780 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:00 old-k8s-version-605045 kubelet[658]: E0923 12:48:00.479190     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.473963 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:02 old-k8s-version-605045 kubelet[658]: E0923 12:48:02.466076     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.474292 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:11 old-k8s-version-605045 kubelet[658]: E0923 12:48:11.465507     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.474477 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:16 old-k8s-version-605045 kubelet[658]: E0923 12:48:16.465938     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.474805 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:22 old-k8s-version-605045 kubelet[658]: E0923 12:48:22.465512     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.474992 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:30 old-k8s-version-605045 kubelet[658]: E0923 12:48:30.465840     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.475329 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:35 old-k8s-version-605045 kubelet[658]: E0923 12:48:35.474238     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.475519 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:44 old-k8s-version-605045 kubelet[658]: E0923 12:48:44.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.475848 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:49 old-k8s-version-605045 kubelet[658]: E0923 12:48:49.466027     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.476033 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:55 old-k8s-version-605045 kubelet[658]: E0923 12:48:55.466326     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.476361 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:04 old-k8s-version-605045 kubelet[658]: E0923 12:49:04.465573     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.476571 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:07 old-k8s-version-605045 kubelet[658]: E0923 12:49:07.466660     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.476901 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:16 old-k8s-version-605045 kubelet[658]: E0923 12:49:16.465577     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.477087 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:20 old-k8s-version-605045 kubelet[658]: E0923 12:49:20.466093     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.477419 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: E0923 12:49:29.465520     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.477605 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:32 old-k8s-version-605045 kubelet[658]: E0923 12:49:32.465885     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.477931 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:53.478116 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:53.478441 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
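Note: the warning list above shows two distinct failure loops. metrics-server alternates between ErrImagePull and ImagePullBackOff because fake.domain is not resolvable (the lookup against 192.168.85.1:53 returns "no such host"), so the image can never be pulled; dashboard-metrics-scraper is in CrashLoopBackOff, and the quoted delays (10s, 20s, 40s, 1m20s, 2m40s) trace kubelet's restart back-off doubling toward its cap. An illustrative check of the unresolvable registry host, using the resolver named in the error:

    nslookup fake.domain 192.168.85.1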
	I0923 12:49:53.478453 3102485 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:49:53.478468 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:49:53.626281 3102485 logs.go:123] Gathering logs for kube-scheduler [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89] ...
	I0923 12:49:53.626312 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:49:53.671782 3102485 logs.go:123] Gathering logs for kube-proxy [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb] ...
	I0923 12:49:53.671827 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:49:53.715920 3102485 logs.go:123] Gathering logs for kube-proxy [c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8] ...
	I0923 12:49:53.715952 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:49:53.773316 3102485 logs.go:123] Gathering logs for kindnet [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6] ...
	I0923 12:49:53.773355 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:49:53.824387 3102485 logs.go:123] Gathering logs for dmesg ...
	I0923 12:49:53.824420 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:49:53.844575 3102485 logs.go:123] Gathering logs for coredns [8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993] ...
	I0923 12:49:53.844609 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:49:53.896670 3102485 logs.go:123] Gathering logs for kube-controller-manager [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd] ...
	I0923 12:49:53.896699 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:49:53.954435 3102485 logs.go:123] Gathering logs for kindnet [7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f] ...
	I0923 12:49:53.954471 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:49:53.996651 3102485 logs.go:123] Gathering logs for storage-provisioner [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19] ...
	I0923 12:49:53.996678 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:49:54.043169 3102485 logs.go:123] Gathering logs for storage-provisioner [0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649] ...
	I0923 12:49:54.043207 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:49:54.091390 3102485 logs.go:123] Gathering logs for containerd ...
	I0923 12:49:54.091423 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 12:49:54.154040 3102485 logs.go:123] Gathering logs for container status ...
	I0923 12:49:54.154077 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:49:54.203612 3102485 logs.go:123] Gathering logs for kube-apiserver [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb] ...
	I0923 12:49:54.203644 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:49:54.262283 3102485 logs.go:123] Gathering logs for etcd [1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2] ...
	I0923 12:49:54.262317 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:49:54.308962 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:54.308990 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:49:54.309084 3102485 out.go:270] X Problems detected in kubelet:
	W0923 12:49:54.309098 3102485 out.go:270]   Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: E0923 12:49:29.465520     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:54.309128 3102485 out.go:270]   Sep 23 12:49:32 old-k8s-version-605045 kubelet[658]: E0923 12:49:32.465885     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:54.309153 3102485 out.go:270]   Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:49:54.309161 3102485 out.go:270]   Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:49:54.309172 3102485 out.go:270]   Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	I0923 12:49:54.309177 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:49:54.309191 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
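	The problem summary above names the two pods that never come up on old-k8s-version-605045: dashboard-metrics-scraper-8d5bb5db8-7btzn is in CrashLoopBackOff and metrics-server-9975d5f86-kf958 is in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4. A minimal way to inspect both by hand, assuming kubectl still carries the profile's context (a reproduction sketch, not part of the test harness):

	    # The Events section at the bottom of each describe carries the back-off reason.
	    kubectl --context old-k8s-version-605045 describe pod dashboard-metrics-scraper-8d5bb5db8-7btzn -n kubernetes-dashboard
	    kubectl --context old-k8s-version-605045 describe pod metrics-server-9975d5f86-kf958 -n kube-system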
	I0923 12:50:02.922302 3107459 system_pods.go:59] 9 kube-system pods found
	I0923 12:50:02.922338 3107459 system_pods.go:61] "coredns-7c65d6cfc9-r5klx" [442d02d5-c0b6-416c-bc87-b8a2337a1301] Running
	I0923 12:50:02.922345 3107459 system_pods.go:61] "etcd-no-preload-569169" [75f544ac-42d7-489a-bc48-9bdffa18755c] Running
	I0923 12:50:02.922350 3107459 system_pods.go:61] "kindnet-shb4d" [37d396f8-f16a-4ddf-b53f-c437d2ec1c49] Running
	I0923 12:50:02.922354 3107459 system_pods.go:61] "kube-apiserver-no-preload-569169" [091070ef-ccd3-45ea-a976-899c5d833219] Running
	I0923 12:50:02.922360 3107459 system_pods.go:61] "kube-controller-manager-no-preload-569169" [6be673d7-a74b-4dbb-a17b-94c750685abb] Running
	I0923 12:50:02.922363 3107459 system_pods.go:61] "kube-proxy-bcttj" [7aaf66d9-9d3b-4a92-82e0-b3b5dc8e4673] Running
	I0923 12:50:02.922367 3107459 system_pods.go:61] "kube-scheduler-no-preload-569169" [ca379951-c239-4243-97f9-9fc4c75bce3a] Running
	I0923 12:50:02.922374 3107459 system_pods.go:61] "metrics-server-6867b74b74-5twtp" [a9a15ae3-d31c-43f6-be0b-8fb9aa4f196a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:50:02.922386 3107459 system_pods.go:61] "storage-provisioner" [8704a02a-10e0-43cd-842a-4de10f70c535] Running
	I0923 12:50:02.922401 3107459 system_pods.go:74] duration metric: took 11.874767747s to wait for pod list to return data ...
	I0923 12:50:02.922409 3107459 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:50:02.925267 3107459 default_sa.go:45] found service account: "default"
	I0923 12:50:02.925296 3107459 default_sa.go:55] duration metric: took 2.878215ms for default service account to be created ...
	I0923 12:50:02.925306 3107459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:50:02.930935 3107459 system_pods.go:86] 9 kube-system pods found
	I0923 12:50:02.930970 3107459 system_pods.go:89] "coredns-7c65d6cfc9-r5klx" [442d02d5-c0b6-416c-bc87-b8a2337a1301] Running
	I0923 12:50:02.930977 3107459 system_pods.go:89] "etcd-no-preload-569169" [75f544ac-42d7-489a-bc48-9bdffa18755c] Running
	I0923 12:50:02.930985 3107459 system_pods.go:89] "kindnet-shb4d" [37d396f8-f16a-4ddf-b53f-c437d2ec1c49] Running
	I0923 12:50:02.930990 3107459 system_pods.go:89] "kube-apiserver-no-preload-569169" [091070ef-ccd3-45ea-a976-899c5d833219] Running
	I0923 12:50:02.930996 3107459 system_pods.go:89] "kube-controller-manager-no-preload-569169" [6be673d7-a74b-4dbb-a17b-94c750685abb] Running
	I0923 12:50:02.931000 3107459 system_pods.go:89] "kube-proxy-bcttj" [7aaf66d9-9d3b-4a92-82e0-b3b5dc8e4673] Running
	I0923 12:50:02.931004 3107459 system_pods.go:89] "kube-scheduler-no-preload-569169" [ca379951-c239-4243-97f9-9fc4c75bce3a] Running
	I0923 12:50:02.931011 3107459 system_pods.go:89] "metrics-server-6867b74b74-5twtp" [a9a15ae3-d31c-43f6-be0b-8fb9aa4f196a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 12:50:02.931015 3107459 system_pods.go:89] "storage-provisioner" [8704a02a-10e0-43cd-842a-4de10f70c535] Running
	I0923 12:50:02.931025 3107459 system_pods.go:126] duration metric: took 5.713459ms to wait for k8s-apps to be running ...
	I0923 12:50:02.931037 3107459 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:50:02.931097 3107459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:50:02.951124 3107459 system_svc.go:56] duration metric: took 20.076929ms WaitForService to wait for kubelet
	I0923 12:50:02.951220 3107459 kubeadm.go:582] duration metric: took 4m40.978155055s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:50:02.951254 3107459 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:50:02.954625 3107459 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 12:50:02.954705 3107459 node_conditions.go:123] node cpu capacity is 2
	I0923 12:50:02.954736 3107459 node_conditions.go:105] duration metric: took 3.343443ms to run NodePressure ...
	I0923 12:50:02.954778 3107459 start.go:241] waiting for startup goroutines ...
	I0923 12:50:02.954804 3107459 start.go:246] waiting for cluster config update ...
	I0923 12:50:02.954829 3107459 start.go:255] writing updated cluster config ...
	I0923 12:50:02.955198 3107459 ssh_runner.go:195] Run: rm -f paused
	I0923 12:50:03.019577 3107459 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:50:03.021829 3107459 out.go:177] * Done! kubectl is now configured to use "no-preload-569169" cluster and "default" namespace by default
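	The block above is minikube's end-of-start verification for no-preload-569169: list the kube-system pods, wait for the default service account, re-check that the k8s apps are Running, confirm the kubelet systemd unit is active, then read node capacity for the NodePressure check. The same sequence can be approximated by hand; a sketch using the names from the log, assuming minikube's usual convention that the kubectl context and node name match the profile name:

	    # API-side checks: pods and the default service account.
	    kubectl --context no-preload-569169 -n kube-system get pods
	    kubectl --context no-preload-569169 get serviceaccount default
	    # Node-side check, the same command shown at system_svc.go above.
	    minikube -p no-preload-569169 ssh -- sudo systemctl is-active --quiet service kubelet && echo kubelet active
	    # Capacity figures behind the NodePressure numbers (ephemeral-storage, cpu).
	    kubectl --context no-preload-569169 get node no-preload-569169 -o jsonpath='{.status.capacity}'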
	I0923 12:50:04.310257 3102485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:50:04.324717 3102485 api_server.go:72] duration metric: took 5m54.581827s to wait for apiserver process to appear ...
	I0923 12:50:04.324741 3102485 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:50:04.324779 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0923 12:50:04.324837 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 12:50:04.371861 3102485 cri.go:89] found id: "b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:50:04.371888 3102485 cri.go:89] found id: "2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:50:04.371893 3102485 cri.go:89] found id: ""
	I0923 12:50:04.371900 3102485 logs.go:276] 2 containers: [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85]
	I0923 12:50:04.371957 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.376309 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.380595 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0923 12:50:04.380674 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 12:50:04.423923 3102485 cri.go:89] found id: "ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:50:04.423947 3102485 cri.go:89] found id: "1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:50:04.423952 3102485 cri.go:89] found id: ""
	I0923 12:50:04.423960 3102485 logs.go:276] 2 containers: [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2]
	I0923 12:50:04.424021 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.427792 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.431611 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0923 12:50:04.431688 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 12:50:04.472288 3102485 cri.go:89] found id: "0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:50:04.472310 3102485 cri.go:89] found id: "8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:50:04.472315 3102485 cri.go:89] found id: ""
	I0923 12:50:04.472322 3102485 logs.go:276] 2 containers: [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993]
	I0923 12:50:04.472380 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.476372 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.479861 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0923 12:50:04.479939 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 12:50:04.519484 3102485 cri.go:89] found id: "df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:50:04.519508 3102485 cri.go:89] found id: "2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:50:04.519513 3102485 cri.go:89] found id: ""
	I0923 12:50:04.519521 3102485 logs.go:276] 2 containers: [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef]
	I0923 12:50:04.519579 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.523443 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.527138 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0923 12:50:04.527233 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 12:50:04.571087 3102485 cri.go:89] found id: "3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:50:04.571107 3102485 cri.go:89] found id: "c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:50:04.571112 3102485 cri.go:89] found id: ""
	I0923 12:50:04.571119 3102485 logs.go:276] 2 containers: [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8]
	I0923 12:50:04.571176 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.575120 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.578668 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 12:50:04.578769 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 12:50:04.622151 3102485 cri.go:89] found id: "37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:50:04.622181 3102485 cri.go:89] found id: "7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:50:04.622186 3102485 cri.go:89] found id: ""
	I0923 12:50:04.622194 3102485 logs.go:276] 2 containers: [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af]
	I0923 12:50:04.622263 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.626117 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.629549 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0923 12:50:04.629624 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 12:50:04.669203 3102485 cri.go:89] found id: "a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:50:04.669277 3102485 cri.go:89] found id: "7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:50:04.669291 3102485 cri.go:89] found id: ""
	I0923 12:50:04.669299 3102485 logs.go:276] 2 containers: [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f]
	I0923 12:50:04.669417 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.673133 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.676629 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0923 12:50:04.676723 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0923 12:50:04.718915 3102485 cri.go:89] found id: "7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
	I0923 12:50:04.718939 3102485 cri.go:89] found id: ""
	I0923 12:50:04.718947 3102485 logs.go:276] 1 containers: [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a]
	I0923 12:50:04.719031 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.723030 3102485 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0923 12:50:04.723104 3102485 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0923 12:50:04.773144 3102485 cri.go:89] found id: "036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:50:04.773225 3102485 cri.go:89] found id: "0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:50:04.773246 3102485 cri.go:89] found id: ""
	I0923 12:50:04.773280 3102485 logs.go:276] 2 containers: [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649]
	I0923 12:50:04.773393 3102485 ssh_runner.go:195] Run: which crictl
	I0923 12:50:04.777009 3102485 ssh_runner.go:195] Run: which crictl
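	Each cri.go/logs.go pair above repeats one pattern per control-plane component: ask crictl for the IDs of every container, running or exited, whose name matches the component, then tail each ID's log. Two IDs per component are expected here because SecondStart restarts the cluster, leaving one exited container beside the live one. A hand-run equivalent on the old-k8s-version-605045 node, reusing an ID already printed above:

	    # Enumerate matching container IDs; -a includes exited containers.
	    minikube -p old-k8s-version-605045 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	    # Tail the last 400 lines of one returned ID, as the log-gathering step does.
	    minikube -p old-k8s-version-605045 ssh -- sudo crictl logs --tail 400 b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb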
	I0923 12:50:04.780384 3102485 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:50:04.780408 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:50:04.928411 3102485 logs.go:123] Gathering logs for etcd [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a] ...
	I0923 12:50:04.928490 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a"
	I0923 12:50:04.972999 3102485 logs.go:123] Gathering logs for kube-scheduler [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89] ...
	I0923 12:50:04.973033 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89"
	I0923 12:50:05.018662 3102485 logs.go:123] Gathering logs for kube-scheduler [2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef] ...
	I0923 12:50:05.018696 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef"
	I0923 12:50:05.073546 3102485 logs.go:123] Gathering logs for kube-controller-manager [7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af] ...
	I0923 12:50:05.073652 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af"
	I0923 12:50:05.135452 3102485 logs.go:123] Gathering logs for kindnet [7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f] ...
	I0923 12:50:05.135491 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f"
	I0923 12:50:05.180530 3102485 logs.go:123] Gathering logs for kubelet ...
	I0923 12:50:05.180558 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:50:05.235647 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989651     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.235906 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.989920     658 reflector.go:138] object-"kube-system"/"coredns-token-ljzll": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ljzll" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236127 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990004     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-267ss": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-267ss" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236340 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990077     658 reflector.go:138] object-"kube-system"/"kindnet-token-np6vw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-np6vw" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236541 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990159     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236769 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990239     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xjwkr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xjwkr" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.236977 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:28 old-k8s-version-605045 kubelet[658]: E0923 12:44:28.990319     658 reflector.go:138] object-"default"/"default-token-cl5l9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cl5l9" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.238682 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:29 old-k8s-version-605045 kubelet[658]: E0923 12:44:29.145826     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7ztnm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7ztnm" is forbidden: User "system:node:old-k8s-version-605045" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-605045' and this object
	W0923 12:50:05.246929 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.630570     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.247743 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:31 old-k8s-version-605045 kubelet[658]: E0923 12:44:31.843250     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.250499 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:45 old-k8s-version-605045 kubelet[658]: E0923 12:44:45.501139     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.252595 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:53 old-k8s-version-605045 kubelet[658]: E0923 12:44:53.971600     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.252923 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:54 old-k8s-version-605045 kubelet[658]: E0923 12:44:54.975959     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.253253 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.411779     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.253439 3102485 logs.go:138] Found kubelet problem: Sep 23 12:44:56 old-k8s-version-605045 kubelet[658]: E0923 12:44:56.466238     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.254205 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:04 old-k8s-version-605045 kubelet[658]: E0923 12:45:04.013052     658 pod_workers.go:191] Error syncing pod bc5f03f4-beaa-4a9b-a052-24d558f95beb ("storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc5f03f4-beaa-4a9b-a052-24d558f95beb)"
	W0923 12:50:05.256971 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:09 old-k8s-version-605045 kubelet[658]: E0923 12:45:09.476299     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.257566 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:11 old-k8s-version-605045 kubelet[658]: E0923 12:45:11.038230     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.257891 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:16 old-k8s-version-605045 kubelet[658]: E0923 12:45:16.410049     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.258204 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:24 old-k8s-version-605045 kubelet[658]: E0923 12:45:24.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.258788 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:32 old-k8s-version-605045 kubelet[658]: E0923 12:45:32.109824     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.259112 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:36 old-k8s-version-605045 kubelet[658]: E0923 12:45:36.410320     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.259325 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:39 old-k8s-version-605045 kubelet[658]: E0923 12:45:39.471779     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.259652 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:51 old-k8s-version-605045 kubelet[658]: E0923 12:45:51.466132     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.262093 3102485 logs.go:138] Found kubelet problem: Sep 23 12:45:53 old-k8s-version-605045 kubelet[658]: E0923 12:45:53.479122     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.262418 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:02 old-k8s-version-605045 kubelet[658]: E0923 12:46:02.465540     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.262604 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:07 old-k8s-version-605045 kubelet[658]: E0923 12:46:07.466508     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.263188 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:17 old-k8s-version-605045 kubelet[658]: E0923 12:46:17.299572     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.263379 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:22 old-k8s-version-605045 kubelet[658]: E0923 12:46:22.465904     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.263705 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:26 old-k8s-version-605045 kubelet[658]: E0923 12:46:26.410045     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.263889 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:35 old-k8s-version-605045 kubelet[658]: E0923 12:46:35.469207     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.264212 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:38 old-k8s-version-605045 kubelet[658]: E0923 12:46:38.465574     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.264396 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:47 old-k8s-version-605045 kubelet[658]: E0923 12:46:47.467336     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.264721 3102485 logs.go:138] Found kubelet problem: Sep 23 12:46:52 old-k8s-version-605045 kubelet[658]: E0923 12:46:52.465530     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.264904 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:02 old-k8s-version-605045 kubelet[658]: E0923 12:47:02.465921     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.265242 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:07 old-k8s-version-605045 kubelet[658]: E0923 12:47:07.466739     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.267689 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:14 old-k8s-version-605045 kubelet[658]: E0923 12:47:14.473947     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.268018 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:22 old-k8s-version-605045 kubelet[658]: E0923 12:47:22.465965     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.268204 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:26 old-k8s-version-605045 kubelet[658]: E0923 12:47:26.465940     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.268528 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:33 old-k8s-version-605045 kubelet[658]: E0923 12:47:33.467362     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.268712 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:38 old-k8s-version-605045 kubelet[658]: E0923 12:47:38.465964     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.269300 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:45 old-k8s-version-605045 kubelet[658]: E0923 12:47:45.536822     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.269629 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:46 old-k8s-version-605045 kubelet[658]: E0923 12:47:46.541111     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.269813 3102485 logs.go:138] Found kubelet problem: Sep 23 12:47:49 old-k8s-version-605045 kubelet[658]: E0923 12:47:49.465875     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.270138 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:00 old-k8s-version-605045 kubelet[658]: E0923 12:48:00.479190     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.270323 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:02 old-k8s-version-605045 kubelet[658]: E0923 12:48:02.466076     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.270652 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:11 old-k8s-version-605045 kubelet[658]: E0923 12:48:11.465507     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.270835 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:16 old-k8s-version-605045 kubelet[658]: E0923 12:48:16.465938     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.271160 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:22 old-k8s-version-605045 kubelet[658]: E0923 12:48:22.465512     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.271360 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:30 old-k8s-version-605045 kubelet[658]: E0923 12:48:30.465840     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.271686 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:35 old-k8s-version-605045 kubelet[658]: E0923 12:48:35.474238     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.271870 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:44 old-k8s-version-605045 kubelet[658]: E0923 12:48:44.465984     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.272196 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:49 old-k8s-version-605045 kubelet[658]: E0923 12:48:49.466027     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.272381 3102485 logs.go:138] Found kubelet problem: Sep 23 12:48:55 old-k8s-version-605045 kubelet[658]: E0923 12:48:55.466326     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.272706 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:04 old-k8s-version-605045 kubelet[658]: E0923 12:49:04.465573     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.272892 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:07 old-k8s-version-605045 kubelet[658]: E0923 12:49:07.466660     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.273218 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:16 old-k8s-version-605045 kubelet[658]: E0923 12:49:16.465577     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.273406 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:20 old-k8s-version-605045 kubelet[658]: E0923 12:49:20.466093     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.273731 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: E0923 12:49:29.465520     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.273915 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:32 old-k8s-version-605045 kubelet[658]: E0923 12:49:32.465885     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.274240 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.274425 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:05.274751 3102485 logs.go:138] Found kubelet problem: Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:05.277200 3102485 logs.go:138] Found kubelet problem: Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.486035     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:05.277533 3102485 logs.go:138] Found kubelet problem: Sep 23 12:50:04 old-k8s-version-605045 kubelet[658]: E0923 12:50:04.465610     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
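	Every ErrImagePull in the scan above bottoms out in the same DNS failure: fake.domain never resolves (the lookup against 192.168.85.1:53 returns "no such host"), so the metrics-server image can never be pulled; fake.domain appears to be a deliberately unreachable registry in the test fixture, making this back-off loop the fixture's expected behavior rather than a transient flake. The failure can be reproduced directly on the node (a sketch; getent is assumed to be present in the node image):

	    # The pull fails at name resolution, matching the kubelet error above.
	    minikube -p old-k8s-version-605045 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	    # Confirm the DNS side independently of the container runtime.
	    minikube -p old-k8s-version-605045 ssh -- getent hosts fake.domain || echo "fake.domain does not resolve"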
	I0923 12:50:05.277544 3102485 logs.go:123] Gathering logs for etcd [1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2] ...
	I0923 12:50:05.277558 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2"
	I0923 12:50:05.322579 3102485 logs.go:123] Gathering logs for coredns [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb] ...
	I0923 12:50:05.322658 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb"
	I0923 12:50:05.365537 3102485 logs.go:123] Gathering logs for coredns [8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993] ...
	I0923 12:50:05.365569 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993"
	I0923 12:50:05.407857 3102485 logs.go:123] Gathering logs for kube-proxy [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb] ...
	I0923 12:50:05.407925 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb"
	I0923 12:50:05.456230 3102485 logs.go:123] Gathering logs for kube-proxy [c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8] ...
	I0923 12:50:05.456256 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8"
	I0923 12:50:05.499217 3102485 logs.go:123] Gathering logs for kube-controller-manager [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd] ...
	I0923 12:50:05.499248 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd"
	I0923 12:50:05.563528 3102485 logs.go:123] Gathering logs for kubernetes-dashboard [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a] ...
	I0923 12:50:05.563564 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a"
	I0923 12:50:05.606773 3102485 logs.go:123] Gathering logs for kube-apiserver [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb] ...
	I0923 12:50:05.606803 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb"
	I0923 12:50:05.679239 3102485 logs.go:123] Gathering logs for containerd ...
	I0923 12:50:05.679272 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0923 12:50:05.750829 3102485 logs.go:123] Gathering logs for storage-provisioner [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19] ...
	I0923 12:50:05.750867 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19"
	I0923 12:50:05.792354 3102485 logs.go:123] Gathering logs for storage-provisioner [0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649] ...
	I0923 12:50:05.792381 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649"
	I0923 12:50:05.836837 3102485 logs.go:123] Gathering logs for kindnet [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6] ...
	I0923 12:50:05.836865 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6"
	I0923 12:50:05.905615 3102485 logs.go:123] Gathering logs for kube-apiserver [2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85] ...
	I0923 12:50:05.905647 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85"
	I0923 12:50:05.963389 3102485 logs.go:123] Gathering logs for container status ...
	I0923 12:50:05.963426 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:50:06.020318 3102485 logs.go:123] Gathering logs for dmesg ...
	I0923 12:50:06.020356 3102485 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:50:06.038364 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:50:06.038396 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:50:06.038444 3102485 out.go:270] X Problems detected in kubelet:
	W0923 12:50:06.038457 3102485 out.go:270]   Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:06.038464 3102485 out.go:270]   Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:50:06.038477 3102485 out.go:270]   Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	W0923 12:50:06.038485 3102485 out.go:270]   Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.486035     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0923 12:50:06.038496 3102485 out.go:270]   Sep 23 12:50:04 old-k8s-version-605045 kubelet[658]: E0923 12:50:04.465610     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	I0923 12:50:06.038518 3102485 out.go:358] Setting ErrFile to fd 2...
	I0923 12:50:06.038531 3102485 out.go:392] TERM=,COLORTERM=, which probably does not support color
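
	The two failure loops flagged above can be inspected directly on the cluster; a minimal sketch, assuming the kubectl context is named after the profile (old-k8s-version-605045) and using the pod names verbatim from the kubelet messages:

	# crash loop: fetch logs from the scraper's previous (exited) container
	kubectl --context old-k8s-version-605045 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-7btzn --previous
	# image-pull loop: the pod events should repeat the fake.domain resolution error
	kubectl --context old-k8s-version-605045 -n kube-system describe pod metrics-server-9975d5f86-kf958
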
	I0923 12:50:16.039609 3102485 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0923 12:50:16.049801 3102485 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0923 12:50:16.052452 3102485 out.go:201] 
	W0923 12:50:16.054574 3102485 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 12:50:16.054647 3102485 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 12:50:16.054671 3102485 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 12:50:16.054677 3102485 out.go:270] * 
	W0923 12:50:16.055931 3102485 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:50:16.058110 3102485 out.go:201] 
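
	If reproducing locally, the recovery steps suggested above can be run as printed; a minimal sketch (the profile name is taken from the node name in the logs, the flags are exactly those minikube printed):

	# capture full logs to attach to a GitHub issue
	minikube logs --file=logs.txt -p old-k8s-version-605045
	# tear down all profiles and cached state before retrying
	minikube delete --all --purge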
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	4e7193d4b39bc       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   ea38f9c601a25       dashboard-metrics-scraper-8d5bb5db8-7btzn
	036c3b8487f90       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   94921e2d0f194       storage-provisioner
	7f77123376f58       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   2cb18bad0e3bd       kubernetes-dashboard-cd95d586-kbkhr
	0c783388fb537       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   94921e2d0f194       storage-provisioner
	3613dc571cef5       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   f9c415e0b1fc4       kube-proxy-9m2fb
	7acdc9496705b       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   6256fd653a319       busybox
	0b37a02f00d76       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   ba18dbe60e613       coredns-74ff55c5b-pb95z
	a640ded465247       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   a0befa23d3ef1       kindnet-xr6zl
	ec7d4cbc7a416       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   9f9c8588cd840       etcd-old-k8s-version-605045
	37856af16ad99       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   78398563e6151       kube-controller-manager-old-k8s-version-605045
	b2282feb959a3       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   3a88afb9bda02       kube-apiserver-old-k8s-version-605045
	df9e1754aa717       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   a7d93b3641799       kube-scheduler-old-k8s-version-605045
	a2e6c2b12c626       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   5141279d2a106       busybox
	8ff2c93901230       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   a09ce8ae9de9f       coredns-74ff55c5b-pb95z
	7b7c86609a480       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   fc6662601f18a       kindnet-xr6zl
	c012604027179       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   1231436aaa084       kube-proxy-9m2fb
	7c7e0e07a9e06       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   931f6d2318c3b       kube-controller-manager-old-k8s-version-605045
	1d4f50e137bba       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   9de746e03b4df       etcd-old-k8s-version-605045
	2e898a14edc8d       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   6426278f57a57       kube-apiserver-old-k8s-version-605045
	2580add0e36cc       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   77bd7af4febc3       kube-scheduler-old-k8s-version-605045
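
	The table above comes from crictl on the node (the same command the log gatherer runs); a sketch of reproducing it over minikube's ssh, assuming the profile name matches the node name:

	# list all containers, running and exited, on the node
	minikube ssh -p old-k8s-version-605045 -- sudo crictl ps -a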
	
	
	==> containerd <==
	Sep 23 12:46:16 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:16.491036676Z" level=info msg="CreateContainer within sandbox \"ea38f9c601a2569f42859c53e167f013d12cec9624665194571c7952642847cd\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"d59945de148935dccd5b1b3247f440954d84fe57040abd386b8281702c814c6f\""
	Sep 23 12:46:16 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:16.492846038Z" level=info msg="StartContainer for \"d59945de148935dccd5b1b3247f440954d84fe57040abd386b8281702c814c6f\""
	Sep 23 12:46:16 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:16.587954469Z" level=info msg="StartContainer for \"d59945de148935dccd5b1b3247f440954d84fe57040abd386b8281702c814c6f\" returns successfully"
	Sep 23 12:46:16 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:16.619262715Z" level=info msg="shim disconnected" id=d59945de148935dccd5b1b3247f440954d84fe57040abd386b8281702c814c6f namespace=k8s.io
	Sep 23 12:46:16 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:16.619576757Z" level=warning msg="cleaning up after shim disconnected" id=d59945de148935dccd5b1b3247f440954d84fe57040abd386b8281702c814c6f namespace=k8s.io
	Sep 23 12:46:16 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:16.619601815Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 12:46:17 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:17.300789673Z" level=info msg="RemoveContainer for \"4e6c7546d12f897baf7ccb1f48215f38e92d6303a04dff1d47ff2015d594d0e7\""
	Sep 23 12:46:17 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:46:17.307091133Z" level=info msg="RemoveContainer for \"4e6c7546d12f897baf7ccb1f48215f38e92d6303a04dff1d47ff2015d594d0e7\" returns successfully"
	Sep 23 12:47:14 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:14.466251233Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:47:14 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:14.471720271Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 23 12:47:14 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:14.473199123Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 23 12:47:14 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:14.473287550Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 23 12:47:44 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:44.468534470Z" level=info msg="CreateContainer within sandbox \"ea38f9c601a2569f42859c53e167f013d12cec9624665194571c7952642847cd\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 23 12:47:44 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:44.482087983Z" level=info msg="CreateContainer within sandbox \"ea38f9c601a2569f42859c53e167f013d12cec9624665194571c7952642847cd\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe\""
	Sep 23 12:47:44 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:44.482763533Z" level=info msg="StartContainer for \"4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe\""
	Sep 23 12:47:44 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:44.560151798Z" level=info msg="StartContainer for \"4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe\" returns successfully"
	Sep 23 12:47:44 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:44.585039577Z" level=info msg="shim disconnected" id=4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe namespace=k8s.io
	Sep 23 12:47:44 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:44.585094945Z" level=warning msg="cleaning up after shim disconnected" id=4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe namespace=k8s.io
	Sep 23 12:47:44 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:44.585106826Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 23 12:47:45 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:45.538646917Z" level=info msg="RemoveContainer for \"d59945de148935dccd5b1b3247f440954d84fe57040abd386b8281702c814c6f\""
	Sep 23 12:47:45 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:47:45.551903406Z" level=info msg="RemoveContainer for \"d59945de148935dccd5b1b3247f440954d84fe57040abd386b8281702c814c6f\" returns successfully"
	Sep 23 12:50:01 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:50:01.466180697Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:50:01 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:50:01.483658180Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 23 12:50:01 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:50:01.485389930Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 23 12:50:01 old-k8s-version-605045 containerd[565]: time="2024-09-23T12:50:01.485435821Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
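
	Every PullImage failure above is a DNS failure: fake.domain never resolves from the node. A quick manual check, using the image reference verbatim from the containerd errors (getent should be present in the node image):

	# repeat the pull containerd is retrying; expect the same "no such host" error
	minikube ssh -p old-k8s-version-605045 -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	# ask the node's resolver directly; a non-zero exit confirms the host is unresolvable
	minikube ssh -p old-k8s-version-605045 -- getent hosts fake.domain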
	
	
	==> coredns [0b37a02f00d76de0d1b1e3d0fc7c2ca82878e8c7fdb9ff77b22d675e8e75cdeb] <==
	I0923 12:45:01.721321       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:44:31.720710929 +0000 UTC m=+0.034024264) (total time: 30.000480499s):
	Trace[2019727887]: [30.000480499s] [30.000480499s] END
	E0923 12:45:01.721358       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0923 12:45:01.721595       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:44:31.72128138 +0000 UTC m=+0.034594715) (total time: 30.000299691s):
	Trace[939984059]: [30.000299691s] [30.000299691s] END
	E0923 12:45:01.721602       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0923 12:45:01.721744       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 12:44:31.721551139 +0000 UTC m=+0.034864482) (total time: 30.000181972s):
	Trace[911902081]: [30.000181972s] [30.000181972s] END
	E0923 12:45:01.721749       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49265 - 19444 "HINFO IN 104182449932966012.4262663174497182054. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00457716s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
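
	The ListAndWatch traces above show CoreDNS timing out against the in-cluster apiserver VIP (10.96.0.1:443) for ~30s after its restart before readiness cleared. One way to confirm the VIP maps to the live apiserver (a sketch; the expected endpoint 192.168.85.2:8443 comes from the healthz checks earlier in the log):

	# the kubernetes Service endpoints should list the apiserver address
	kubectl --context old-k8s-version-605045 get endpoints kubernetes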
	
	
	==> coredns [8ff2c93901230ca94db7c0763287c381fd3c5413e52c3e45be28d60a3638b993] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:40072 - 44776 "HINFO IN 3327055561030228005.1736002694204985564. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011181988s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-605045
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-605045
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a36553b39c7bbbd910f6bfb97f7b698be94b4e6e
	                    minikube.k8s.io/name=old-k8s-version-605045
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_41_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:41:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-605045
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:50:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:45:19 +0000   Mon, 23 Sep 2024 12:41:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:45:19 +0000   Mon, 23 Sep 2024 12:41:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:45:19 +0000   Mon, 23 Sep 2024 12:41:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:45:19 +0000   Mon, 23 Sep 2024 12:41:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-605045
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc3193edb2ea462bb87210e1f4d90c12
	  System UUID:                21a527bb-7643-4c63-8458-02233d2ecd63
	  Boot ID:                    d8899273-2c3a-49f7-8c9a-66d2209373ba
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-74ff55c5b-pb95z                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m19s
	  kube-system                 etcd-old-k8s-version-605045                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m25s
	  kube-system                 kindnet-xr6zl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m19s
	  kube-system                 kube-apiserver-old-k8s-version-605045             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-old-k8s-version-605045    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-9m2fb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-old-k8s-version-605045             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 metrics-server-9975d5f86-kf958                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-7btzn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-kbkhr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-605045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-605045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-605045 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m26s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m26s                  kubelet     Node old-k8s-version-605045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s                  kubelet     Node old-k8s-version-605045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s                  kubelet     Node old-k8s-version-605045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m19s                  kubelet     Node old-k8s-version-605045 status is now: NodeReady
	  Normal  Starting                 8m17s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m1s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m1s (x7 over 6m1s)    kubelet     Node old-k8s-version-605045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-605045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-605045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [1d4f50e137bbacd490ef68aa6755f494bdbd13d94cbf4f6b96dfc5b2f75932d2] <==
	raft2024/09/23 12:41:35 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/09/23 12:41:35 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/09/23 12:41:35 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/09/23 12:41:35 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/09/23 12:41:35 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-09-23 12:41:35.008772 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-23 12:41:35.011549 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-23 12:41:35.011758 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-23 12:41:35.011902 I | etcdserver: published {Name:old-k8s-version-605045 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-09-23 12:41:35.011999 I | embed: ready to serve client requests
	2024-09-23 12:41:35.013602 I | embed: serving client requests on 192.168.85.2:2379
	2024-09-23 12:41:35.014012 I | embed: ready to serve client requests
	2024-09-23 12:41:35.015513 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-23 12:41:56.118860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:06.111300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:16.111243 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:26.111265 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:36.118926 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:46.111351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:42:56.111337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:06.111383 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:16.111244 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:26.111793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:36.111271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:43:46.112760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [ec7d4cbc7a416ae0957456b63255d13609ace8374894c9331cf4df0ab18e3a7a] <==
	2024-09-23 12:46:13.602637 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:46:23.602762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:46:33.602524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:46:43.602648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:46:53.602564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:47:03.602541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:47:13.602653 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:47:23.602584 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:47:33.602616 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:47:43.602956 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:47:53.602609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:48:03.604461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:48:13.602578 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:48:23.602567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:48:33.602679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:48:43.602673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:48:53.602605 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:49:03.602651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:49:13.602594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:49:23.602667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:49:33.602507 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:49:43.602739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:49:53.602609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:50:03.602643 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:50:13.602740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 12:50:18 up 1 day, 20:32,  0 users,  load average: 0.71, 1.47, 2.19
	Linux old-k8s-version-605045 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7b7c86609a480c3071f1c1855f3ff241269d00803f81188257286e0cb5ab383f] <==
	I0923 12:42:04.212255       1 controller.go:338] Waiting for informer caches to sync
	I0923 12:42:04.212260       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0923 12:42:04.412826       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0923 12:42:04.412948       1 metrics.go:61] Registering metrics
	I0923 12:42:04.413056       1 controller.go:374] Syncing nftables rules
	I0923 12:42:14.212259       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:42:14.212300       1 main.go:299] handling current node
	I0923 12:42:24.212783       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:42:24.213001       1 main.go:299] handling current node
	I0923 12:42:34.221397       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:42:34.221614       1 main.go:299] handling current node
	I0923 12:42:44.217322       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:42:44.217356       1 main.go:299] handling current node
	I0923 12:42:54.212425       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:42:54.212542       1 main.go:299] handling current node
	I0923 12:43:04.212194       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:43:04.212229       1 main.go:299] handling current node
	I0923 12:43:14.220575       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:43:14.220609       1 main.go:299] handling current node
	I0923 12:43:24.216527       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:43:24.216559       1 main.go:299] handling current node
	I0923 12:43:34.219374       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:43:34.219406       1 main.go:299] handling current node
	I0923 12:43:44.216915       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:43:44.216947       1 main.go:299] handling current node
	
	
	==> kindnet [a640ded4652476a451a288fd103b4cfadff434e6e1a9131a07c1b076942d21e6] <==
	I0923 12:48:11.720189       1 main.go:299] handling current node
	I0923 12:48:21.720767       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:48:21.720808       1 main.go:299] handling current node
	I0923 12:48:31.712517       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:48:31.712554       1 main.go:299] handling current node
	I0923 12:48:41.719413       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:48:41.719453       1 main.go:299] handling current node
	I0923 12:48:51.712505       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:48:51.712536       1 main.go:299] handling current node
	I0923 12:49:01.719375       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:49:01.719412       1 main.go:299] handling current node
	I0923 12:49:11.712425       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:49:11.712482       1 main.go:299] handling current node
	I0923 12:49:21.720783       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:49:21.720817       1 main.go:299] handling current node
	I0923 12:49:31.712288       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:49:31.712443       1 main.go:299] handling current node
	I0923 12:49:41.719390       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:49:41.719425       1 main.go:299] handling current node
	I0923 12:49:51.715343       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:49:51.715440       1 main.go:299] handling current node
	I0923 12:50:01.723444       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:50:01.723478       1 main.go:299] handling current node
	I0923 12:50:11.718861       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0923 12:50:11.718895       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2e898a14edc8d3e5634da2afd4a8564b3d2e7fcdf9bd291c3eb2c0523f0c7f85] <==
	I0923 12:41:41.888619       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0923 12:41:41.888735       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0923 12:41:41.928699       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0923 12:41:41.932700       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0923 12:41:41.932970       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0923 12:41:42.568769       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 12:41:42.610961       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0923 12:41:42.726433       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0923 12:41:42.727714       1 controller.go:606] quota admission added evaluator for: endpoints
	I0923 12:41:42.737518       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 12:41:43.495742       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0923 12:41:44.079185       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0923 12:41:44.192940       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0923 12:41:52.610782       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 12:41:59.649440       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0923 12:41:59.697637       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0923 12:42:16.891843       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:42:16.891890       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:42:16.891898       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:42:49.105747       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:42:49.105797       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:42:49.105829       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:43:30.842599       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:43:30.842658       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:43:30.842667       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [b2282feb959a32bad1209dc9e1469d0a0066fba60bfc2fb4094ae3fc687dfdbb] <==
	I0923 12:46:56.951027       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:46:56.951036       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0923 12:47:31.713905       1 handler_proxy.go:102] no RequestInfo found in the context
	E0923 12:47:31.713979       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0923 12:47:31.713995       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 12:47:35.096184       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:47:35.096253       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:47:35.096273       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:48:19.575890       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:48:19.576016       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:48:19.576134       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:49:02.655797       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:49:02.655847       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:49:02.655856       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0923 12:49:30.125593       1 handler_proxy.go:102] no RequestInfo found in the context
	E0923 12:49:30.125678       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0923 12:49:30.125688       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 12:49:36.571321       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:49:36.571364       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:49:36.571373       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:50:12.334864       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:50:12.334910       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:50:12.334918       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
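
	The 503s for v1beta1.metrics.k8s.io above (and the matching discovery failures in the controller-manager log below) are consistent with metrics-server never starting because of the fake.domain pull. The aggregated API can be checked directly; a sketch, assuming the standard metrics-server labels:

	# Available should be False while the backing pod is down
	kubectl --context old-k8s-version-605045 get apiservice v1beta1.metrics.k8s.io
	kubectl --context old-k8s-version-605045 -n kube-system get pods -l k8s-app=metrics-server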
	
	
	==> kube-controller-manager [37856af16ad99aeba94fde07e04c227e6c85915d00dafff4a60c1c20b0f562fd] <==
	W0923 12:45:55.068724       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:46:18.714562       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:46:26.719233       1 request.go:655] Throttling request took 1.048030814s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta1?timeout=32s
	W0923 12:46:27.570835       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:46:49.216435       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:46:59.221244       1 request.go:655] Throttling request took 1.048380127s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0923 12:47:00.073249       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:47:19.718539       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:47:31.724114       1 request.go:655] Throttling request took 1.047950703s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0923 12:47:32.575865       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:47:50.220346       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:48:04.226273       1 request.go:655] Throttling request took 1.0481854s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 12:48:05.077999       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:48:20.792981       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:48:36.728556       1 request.go:655] Throttling request took 1.048328963s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 12:48:37.581488       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:48:51.295374       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:49:09.231941       1 request.go:655] Throttling request took 1.048429891s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0923 12:49:10.085222       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:49:21.848753       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:49:41.735670       1 request.go:655] Throttling request took 1.048192074s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 12:49:42.587176       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:49:52.351704       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:50:14.237724       1 request.go:655] Throttling request took 1.048466475s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0923 12:50:15.089494       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [7c7e0e07a9e066b767ae758ebfecb8074648180f820d3e2d75ae49339a7553af] <==
	I0923 12:41:59.654223       1 shared_informer.go:247] Caches are synced for resource quota 
	I0923 12:41:59.655894       1 shared_informer.go:247] Caches are synced for resource quota 
	I0923 12:41:59.661426       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0923 12:41:59.671491       1 shared_informer.go:247] Caches are synced for HPA 
	I0923 12:41:59.679636       1 shared_informer.go:247] Caches are synced for attach detach 
	I0923 12:41:59.691804       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0923 12:41:59.704411       1 shared_informer.go:247] Caches are synced for disruption 
	I0923 12:41:59.704435       1 disruption.go:339] Sending events to api server.
	I0923 12:41:59.706542       1 shared_informer.go:247] Caches are synced for endpoint 
	I0923 12:41:59.707586       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0923 12:41:59.715881       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0923 12:41:59.717590       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9m2fb"
	I0923 12:41:59.717617       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xr6zl"
	I0923 12:41:59.737549       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-kz5zm"
	E0923 12:41:59.813845       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"eec9b228-a151-4529-98e4-04791392653d", ResourceVersion:"271", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862692104, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d73b20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d73b40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001d73b60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73b80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73ba0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73bc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d73be0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d73c20)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001ceb5c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001d8ab48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001fb730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40005b0190)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001d8ab90)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0923 12:41:59.814714       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pb95z"
	I0923 12:41:59.822501       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0923 12:42:00.123470       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0923 12:42:00.141242       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0923 12:42:00.141267       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0923 12:42:00.983979       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0923 12:42:00.998936       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-kz5zm"
	I0923 12:43:46.153207       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0923 12:43:46.247420       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0923 12:43:46.278947       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
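	The "Operation cannot be fulfilled ... the object has been modified" errors above (for daemonset "kindnet" and clusterrole "admin") are ordinary optimistic-concurrency conflicts: the controller held a stale copy of the object, and the remedy the message names, re-reading the latest version and re-applying, is what the controller does automatically on its next sync, so these entries are noise rather than failures. One way to watch the resourceVersion move by hand (a sketch; assumes kindnet lives in kube-system, where minikube deploys it):
	
		kubectl --context old-k8s-version-605045 -n kube-system get daemonset kindnet -o jsonpath='{.metadata.resourceVersion}'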
	
	
	==> kube-proxy [3613dc571cef58ead6749eb2383e0e79ed8c48eb15efb4bba28cf918e2dd3efb] <==
	I0923 12:44:33.141066       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0923 12:44:33.141617       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0923 12:44:33.167328       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 12:44:33.167616       1 server_others.go:185] Using iptables Proxier.
	I0923 12:44:33.168266       1 server.go:650] Version: v1.20.0
	I0923 12:44:33.169949       1 config.go:224] Starting endpoint slice config controller
	I0923 12:44:33.170110       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 12:44:33.171048       1 config.go:315] Starting service config controller
	I0923 12:44:33.171206       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 12:44:33.270384       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0923 12:44:33.271538       1 shared_informer.go:247] Caches are synced for service config 
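	The empty proxy-mode warning here (and in the second kube-proxy log below) just means no mode was configured, so kube-proxy fell back to its iptables proxier, as the following line confirms. On a kubeadm-provisioned cluster like this one, the configured mode can be read from the kube-proxy ConfigMap (a sketch; assumes the default kubeadm ConfigMap name):
	
		kubectl --context old-k8s-version-605045 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'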
	
	
	==> kube-proxy [c01260402717934e0fda25d9ba74c52f11b9fe853dc5b4c12f80f8799ca4f2a8] <==
	I0923 12:42:01.298737       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0923 12:42:01.299032       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0923 12:42:01.318244       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 12:42:01.318585       1 server_others.go:185] Using iptables Proxier.
	I0923 12:42:01.319008       1 server.go:650] Version: v1.20.0
	I0923 12:42:01.319734       1 config.go:315] Starting service config controller
	I0923 12:42:01.319882       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 12:42:01.321011       1 config.go:224] Starting endpoint slice config controller
	I0923 12:42:01.321166       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 12:42:01.422004       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0923 12:42:01.422047       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [2580add0e36cca9407c4890d5578a3ecf589374f987138526bd939a45122e8ef] <==
	W0923 12:41:41.067431       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 12:41:41.067529       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 12:41:41.067558       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 12:41:41.068028       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 12:41:41.153548       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:41:41.153697       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:41:41.154501       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0923 12:41:41.153487       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0923 12:41:41.185116       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:41:41.185451       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 12:41:41.185685       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 12:41:41.186065       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 12:41:41.186332       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:41:41.186393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 12:41:41.186443       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 12:41:41.186508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 12:41:41.186562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:41:41.186616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:41:41.186664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 12:41:41.212943       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:41:42.049998       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:41:42.074388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 12:41:42.210271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:41:42.256570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0923 12:41:44.553946       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
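	The requestheader_controller warning at the top of this log carries its own remedy: grant the scheduler's identity read access to the extension-apiserver-authentication ConfigMap. In this run the "forbidden" errors were transient startup noise, and the informer caches synced at 12:41:44 once RBAC caught up. For a cluster where they persist, a hypothetical instantiation of the suggested command (the binding name is a placeholder, and since the scheduler here authenticates as the user system:kube-scheduler rather than a ServiceAccount, --user is used in place of --serviceaccount):
	
		kubectl --context old-k8s-version-605045 -n kube-system create rolebinding extension-apiserver-authn-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler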
	
	
	==> kube-scheduler [df9e1754aa7176c9aa5049e2d7574a5dfd7dd21b02f14db6d9f5a24905f3cd89] <==
	I0923 12:44:24.980691       1 serving.go:331] Generated self-signed cert in-memory
	W0923 12:44:29.046070       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 12:44:29.046102       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 12:44:29.046120       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 12:44:29.046126       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 12:44:29.307207       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:44:29.307243       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:44:29.312676       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0923 12:44:29.312764       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0923 12:44:29.409861       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 23 12:48:49 old-k8s-version-605045 kubelet[658]: E0923 12:48:49.466027     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	Sep 23 12:48:55 old-k8s-version-605045 kubelet[658]: E0923 12:48:55.466326     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:49:04 old-k8s-version-605045 kubelet[658]: I0923 12:49:04.465204     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe
	Sep 23 12:49:04 old-k8s-version-605045 kubelet[658]: E0923 12:49:04.465573     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	Sep 23 12:49:07 old-k8s-version-605045 kubelet[658]: E0923 12:49:07.466660     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:49:16 old-k8s-version-605045 kubelet[658]: I0923 12:49:16.465209     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe
	Sep 23 12:49:16 old-k8s-version-605045 kubelet[658]: E0923 12:49:16.465577     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	Sep 23 12:49:20 old-k8s-version-605045 kubelet[658]: E0923 12:49:20.466093     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: I0923 12:49:29.465178     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe
	Sep 23 12:49:29 old-k8s-version-605045 kubelet[658]: E0923 12:49:29.465520     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	Sep 23 12:49:32 old-k8s-version-605045 kubelet[658]: E0923 12:49:32.465885     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: I0923 12:49:41.465312     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe
	Sep 23 12:49:41 old-k8s-version-605045 kubelet[658]: E0923 12:49:41.466162     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	Sep 23 12:49:47 old-k8s-version-605045 kubelet[658]: E0923 12:49:47.469587     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: I0923 12:49:52.465139     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe
	Sep 23 12:49:52 old-k8s-version-605045 kubelet[658]: E0923 12:49:52.465952     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.485787     658 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.485859     658 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.486001     658 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-7ztnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-kf958_kube-system(c86555f
f-2cba-4287-abf0-8093aab3c29a): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Sep 23 12:50:01 old-k8s-version-605045 kubelet[658]: E0923 12:50:01.486035     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 23 12:50:04 old-k8s-version-605045 kubelet[658]: I0923 12:50:04.465207     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe
	Sep 23 12:50:04 old-k8s-version-605045 kubelet[658]: E0923 12:50:04.465610     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
	Sep 23 12:50:13 old-k8s-version-605045 kubelet[658]: E0923 12:50:13.466331     658 pod_workers.go:191] Error syncing pod c86555ff-2cba-4287-abf0-8093aab3c29a ("metrics-server-9975d5f86-kf958_kube-system(c86555ff-2cba-4287-abf0-8093aab3c29a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:50:18 old-k8s-version-605045 kubelet[658]: I0923 12:50:18.465180     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4e7193d4b39bc95a722acd825f1ddd864fbef394ccf127e0f216622d16d908fe
	Sep 23 12:50:18 old-k8s-version-605045 kubelet[658]: E0923 12:50:18.465597     658 pod_workers.go:191] Error syncing pod 084a43ab-5137-4d1a-a05b-0a7ed25eae52 ("dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7btzn_kubernetes-dashboard(084a43ab-5137-4d1a-a05b-0a7ed25eae52)"
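	The recurring ErrImagePull/ImagePullBackOff entries above show the metrics-server pod failing exactly as its spec dictates: its image lives at the unresolvable registry fake.domain, and the node's resolver (192.168.85.1:53) duly answers "no such host". The dashboard-metrics-scraper CrashLoopBackOff is a separate failure visible in the same window. Reproducing the diagnosis by hand while the cluster is still up might look like (a sketch; assumes nslookup is present in the node image):
	
		kubectl --context old-k8s-version-605045 -n kube-system describe pod metrics-server-9975d5f86-kf958
		out/minikube-linux-arm64 -p old-k8s-version-605045 ssh -- nslookup fake.domain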
	
	
	==> kubernetes-dashboard [7f77123376f589c236b87db48d11c0941e9837cae0912c199ba429f76e31629a] <==
	2024/09/23 12:44:56 Using namespace: kubernetes-dashboard
	2024/09/23 12:44:56 Using in-cluster config to connect to apiserver
	2024/09/23 12:44:56 Using secret token for csrf signing
	2024/09/23 12:44:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/23 12:44:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/23 12:44:56 Successful initial request to the apiserver, version: v1.20.0
	2024/09/23 12:44:56 Generating JWE encryption key
	2024/09/23 12:44:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/23 12:44:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/23 12:44:56 Initializing JWE encryption key from synchronized object
	2024/09/23 12:44:56 Creating in-cluster Sidecar client
	2024/09/23 12:44:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:44:57 Serving insecurely on HTTP port: 9090
	2024/09/23 12:45:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:45:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:46:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:46:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:47:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:47:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:48:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:48:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:49:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:49:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:44:56 Starting overwatch
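	Each 30-second "Metric client health check failed" line above is the dashboard probing the dashboard-metrics-scraper Service, whose backing pod is crash-looping per the kubelet log; the dashboard itself keeps serving on port 9090 regardless. (The 12:44:56 "Starting overwatch" line simply appears out of timestamp order in the captured output.) A direct look at the scraper's state (a sketch):
	
		kubectl --context old-k8s-version-605045 -n kubernetes-dashboard get pods,svc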
	
	
	==> storage-provisioner [036c3b8487f9041f4dfadb532e6ec45639c4d42f55a3699697f42f3e22679e19] <==
	I0923 12:45:17.656246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 12:45:17.675896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 12:45:17.675989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 12:45:35.201884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 12:45:35.205379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-605045_2727ca76-e3d8-4a29-89d6-75da74e93141!
	I0923 12:45:35.206648       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea175063-3427-4c47-b310-1c3ee60b6be2", APIVersion:"v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-605045_2727ca76-e3d8-4a29-89d6-75da74e93141 became leader
	I0923 12:45:35.319763       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-605045_2727ca76-e3d8-4a29-89d6-75da74e93141!
	
	
	==> storage-provisioner [0c783388fb5375603ae87338a5bc59d00ab3b4a8029fe8ef41c7f1d31e3b1649] <==
	I0923 12:44:33.205650       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 12:45:03.207873       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
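	Read together with the previous block, these two logs show the first storage-provisioner container (0c783388...) dying at 12:45:03 on an i/o timeout against the in-cluster apiserver VIP 10.96.0.1:443, after which its restart (036c3b84...) initialized cleanly and acquired the k8s.io-minikube-hostpath lease at 12:45:35. Pulling the crashed container's output after the restart would look like (a sketch; assumes the pod name storage-provisioner, which minikube uses):
	
		kubectl --context old-k8s-version-605045 -n kube-system logs storage-provisioner --previous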
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-605045 -n old-k8s-version-605045
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-605045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-kf958
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-605045 describe pod metrics-server-9975d5f86-kf958
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-605045 describe pod metrics-server-9975d5f86-kf958: exit status 1 (139.899329ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-kf958" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-605045 describe pod metrics-server-9975d5f86-kf958: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (379.77s)

                                                
                                    

Test pass (292/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.9
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.89
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 422.68
35 TestAddons/parallel/InspektorGadget 11.87
36 TestAddons/parallel/MetricsServer 5.82
40 TestAddons/parallel/CloudSpanner 6.6
42 TestAddons/parallel/NvidiaDevicePlugin 5.56
43 TestAddons/parallel/Yakd 10.83
44 TestAddons/StoppedEnableDisable 6.38
45 TestCertOptions 32.46
46 TestCertExpiration 225.96
48 TestForceSystemdFlag 40.39
49 TestForceSystemdEnv 42.38
50 TestDockerEnvContainerd 44.39
55 TestErrorSpam/setup 28.7
56 TestErrorSpam/start 0.74
57 TestErrorSpam/status 1.04
58 TestErrorSpam/pause 1.87
59 TestErrorSpam/unpause 2.35
60 TestErrorSpam/stop 1.49
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 88.7
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 5.99
67 TestFunctional/serial/KubeContext 0.08
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.15
72 TestFunctional/serial/CacheCmd/cache/add_local 1.31
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 39.62
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.74
83 TestFunctional/serial/LogsFileCmd 1.82
84 TestFunctional/serial/InvalidService 4.81
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 9.39
88 TestFunctional/parallel/DryRun 0.42
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 1.04
94 TestFunctional/parallel/ServiceCmdConnect 11.66
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 24.21
98 TestFunctional/parallel/SSHCmd 0.83
99 TestFunctional/parallel/CpCmd 1.97
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 2.19
106 TestFunctional/parallel/NodeLabels 0.09
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
110 TestFunctional/parallel/License 0.32
111 TestFunctional/parallel/Version/short 0.08
112 TestFunctional/parallel/Version/components 1.3
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.87
118 TestFunctional/parallel/ImageCommands/Setup 0.85
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.85
126 TestFunctional/parallel/ProfileCmd/profile_list 0.48
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.6
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
144 TestFunctional/parallel/MountCmd/any-port 7.84
145 TestFunctional/parallel/ServiceCmd/List 0.58
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
148 TestFunctional/parallel/ServiceCmd/Format 0.42
149 TestFunctional/parallel/ServiceCmd/URL 0.4
150 TestFunctional/parallel/MountCmd/specific-port 1.75
151 TestFunctional/parallel/MountCmd/VerifyCleanup 2.15
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.03
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 120.01
159 TestMultiControlPlane/serial/DeployApp 32.17
160 TestMultiControlPlane/serial/PingHostFromPods 1.55
161 TestMultiControlPlane/serial/AddWorkerNode 24.61
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
164 TestMultiControlPlane/serial/CopyFile 19.34
165 TestMultiControlPlane/serial/StopSecondaryNode 12.96
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
167 TestMultiControlPlane/serial/RestartSecondaryNode 32.21
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.01
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 135.59
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.79
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
172 TestMultiControlPlane/serial/StopCluster 36.04
173 TestMultiControlPlane/serial/RestartCluster 43.25
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
175 TestMultiControlPlane/serial/AddSecondaryNode 44.29
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.97
180 TestJSONOutput/start/Command 47.7
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.68
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.73
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 39.62
206 TestKicCustomNetwork/use_default_bridge_network 39.27
207 TestKicExistingNetwork 34.65
208 TestKicCustomSubnet 32.43
209 TestKicStaticIP 33.84
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 67.58
214 TestMountStart/serial/StartWithMountFirst 6.57
215 TestMountStart/serial/VerifyMountFirst 0.27
216 TestMountStart/serial/StartWithMountSecond 9.11
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.6
219 TestMountStart/serial/VerifyMountPostDelete 0.26
220 TestMountStart/serial/Stop 1.37
221 TestMountStart/serial/RestartStopped 7.21
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 66.27
226 TestMultiNode/serial/DeployApp2Nodes 19.39
227 TestMultiNode/serial/PingHostFrom2Pods 1
228 TestMultiNode/serial/AddNode 18.5
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.67
231 TestMultiNode/serial/CopyFile 10
232 TestMultiNode/serial/StopNode 2.27
233 TestMultiNode/serial/StartAfterStop 10.22
234 TestMultiNode/serial/RestartKeepsNodes 99.4
235 TestMultiNode/serial/DeleteNode 5.53
236 TestMultiNode/serial/StopMultiNode 23.99
237 TestMultiNode/serial/RestartMultiNode 52.56
238 TestMultiNode/serial/ValidateNameConflict 31.17
243 TestPreload 117.74
245 TestScheduledStopUnix 107.23
248 TestInsufficientStorage 10.75
249 TestRunningBinaryUpgrade 82.95
251 TestKubernetesUpgrade 104.19
252 TestMissingContainerUpgrade 186.71
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 39.93
256 TestNoKubernetes/serial/StartWithStopK8s 20.38
257 TestNoKubernetes/serial/Start 9.34
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
259 TestNoKubernetes/serial/ProfileList 0.95
260 TestNoKubernetes/serial/Stop 1.19
261 TestNoKubernetes/serial/StartNoArgs 7.07
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
263 TestStoppedBinaryUpgrade/Setup 0.73
264 TestStoppedBinaryUpgrade/Upgrade 142
273 TestPause/serial/Start 69.76
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
275 TestPause/serial/SecondStartNoReconfiguration 7.03
276 TestPause/serial/Pause 0.95
277 TestPause/serial/VerifyStatus 0.38
278 TestPause/serial/Unpause 0.92
279 TestPause/serial/PauseAgain 1.03
280 TestPause/serial/DeletePaused 2.85
281 TestPause/serial/VerifyDeletedResources 0.46
289 TestNetworkPlugins/group/false 5.5
294 TestStartStop/group/old-k8s-version/serial/FirstStart 153.88
296 TestStartStop/group/no-preload/serial/FirstStart 76.75
297 TestStartStop/group/old-k8s-version/serial/DeployApp 10.88
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
299 TestStartStop/group/old-k8s-version/serial/Stop 14.1
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
302 TestStartStop/group/no-preload/serial/DeployApp 9.88
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
304 TestStartStop/group/no-preload/serial/Stop 12.1
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
306 TestStartStop/group/no-preload/serial/SecondStart 289.18
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
310 TestStartStop/group/no-preload/serial/Pause 3.88
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/embed-certs/serial/FirstStart 66.24
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.16
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
316 TestStartStop/group/old-k8s-version/serial/Pause 3.69
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.43
319 TestStartStop/group/embed-certs/serial/DeployApp 10.34
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
321 TestStartStop/group/embed-certs/serial/Stop 12.14
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
323 TestStartStop/group/embed-certs/serial/SecondStart 267.53
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.14
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 304.34
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/embed-certs/serial/Pause 3.12
334 TestStartStop/group/newest-cni/serial/FirstStart 36.99
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
337 TestStartStop/group/newest-cni/serial/Stop 1.25
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 17.09
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
343 TestStartStop/group/newest-cni/serial/Pause 2.99
344 TestNetworkPlugins/group/auto/Start 61.62
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.31
349 TestNetworkPlugins/group/kindnet/Start 63.17
350 TestNetworkPlugins/group/auto/KubeletFlags 0.49
351 TestNetworkPlugins/group/auto/NetCatPod 10.4
352 TestNetworkPlugins/group/auto/DNS 0.19
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.17
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
357 TestNetworkPlugins/group/kindnet/NetCatPod 10.37
358 TestNetworkPlugins/group/calico/Start 73.58
359 TestNetworkPlugins/group/kindnet/DNS 0.22
360 TestNetworkPlugins/group/kindnet/Localhost 0.18
361 TestNetworkPlugins/group/kindnet/HairPin 0.21
362 TestNetworkPlugins/group/custom-flannel/Start 52.43
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.31
365 TestNetworkPlugins/group/calico/NetCatPod 9.26
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
368 TestNetworkPlugins/group/calico/DNS 0.26
369 TestNetworkPlugins/group/calico/Localhost 0.25
370 TestNetworkPlugins/group/calico/HairPin 0.24
371 TestNetworkPlugins/group/custom-flannel/DNS 0.28
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
374 TestNetworkPlugins/group/enable-default-cni/Start 49.64
375 TestNetworkPlugins/group/flannel/Start 54.1
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
383 TestNetworkPlugins/group/flannel/NetCatPod 11.37
384 TestNetworkPlugins/group/flannel/DNS 0.24
385 TestNetworkPlugins/group/flannel/Localhost 0.23
386 TestNetworkPlugins/group/bridge/Start 81.15
387 TestNetworkPlugins/group/flannel/HairPin 0.23
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
389 TestNetworkPlugins/group/bridge/NetCatPod 10.24
390 TestNetworkPlugins/group/bridge/DNS 0.18
391 TestNetworkPlugins/group/bridge/Localhost 0.15
392 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (6.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-611017 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-611017 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.894974264s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.90s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 11:35:40.428255 2903158 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0923 11:35:40.428349 2903158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
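This check passes without network access because it only verifies that the preload tarball is already present in the local cache; a manual equivalent (path verbatim from the log above):

	ls -lh /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4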

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-611017
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-611017: exit status 85 (68.577505ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-611017 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |          |
	|         | -p download-only-611017        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:35:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:35:33.581810 2903163 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:35:33.582021 2903163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:33.582049 2903163 out.go:358] Setting ErrFile to fd 2...
	I0923 11:35:33.582068 2903163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:33.582354 2903163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	W0923 11:35:33.582547 2903163 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19688-2897765/.minikube/config/config.json: open /home/jenkins/minikube-integration/19688-2897765/.minikube/config/config.json: no such file or directory
	I0923 11:35:33.583028 2903163 out.go:352] Setting JSON to true
	I0923 11:35:33.583990 2903163 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155881,"bootTime":1726935453,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:35:33.584090 2903163 start.go:139] virtualization:  
	I0923 11:35:33.587115 2903163 out.go:97] [download-only-611017] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0923 11:35:33.587290 2903163 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 11:35:33.587347 2903163 notify.go:220] Checking for updates...
	I0923 11:35:33.589050 2903163 out.go:169] MINIKUBE_LOCATION=19688
	I0923 11:35:33.590783 2903163 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:35:33.592649 2903163 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:35:33.594372 2903163 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 11:35:33.596140 2903163 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 11:35:33.599554 2903163 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:35:33.599813 2903163 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:35:33.625013 2903163 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:35:33.625121 2903163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:33.693277 2903163 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 11:35:33.683864497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:33.693400 2903163 docker.go:318] overlay module found
	I0923 11:35:33.695407 2903163 out.go:97] Using the docker driver based on user configuration
	I0923 11:35:33.695443 2903163 start.go:297] selected driver: docker
	I0923 11:35:33.695451 2903163 start.go:901] validating driver "docker" against <nil>
	I0923 11:35:33.695553 2903163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:33.740250 2903163 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 11:35:33.731022531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:33.740475 2903163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:35:33.740777 2903163 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 11:35:33.740933 2903163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:35:33.742952 2903163 out.go:169] Using Docker driver with root privileges
	I0923 11:35:33.744825 2903163 cni.go:84] Creating CNI manager for ""
	I0923 11:35:33.744900 2903163 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:35:33.744916 2903163 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 11:35:33.745001 2903163 start.go:340] cluster config:
	{Name:download-only-611017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-611017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:35:33.746998 2903163 out.go:97] Starting "download-only-611017" primary control-plane node in "download-only-611017" cluster
	I0923 11:35:33.747040 2903163 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 11:35:33.748686 2903163 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:35:33.748716 2903163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 11:35:33.748882 2903163 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:35:33.768277 2903163 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:33.769200 2903163 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:35:33.769310 2903163 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:33.812338 2903163 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:33.812377 2903163 cache.go:56] Caching tarball of preloaded images
	I0923 11:35:33.812565 2903163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0923 11:35:33.814815 2903163 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 11:35:33.814844 2903163 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0923 11:35:33.899571 2903163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:38.485669 2903163 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	
	
	* The control-plane node download-only-611017 host does not exist
	  To start a cluster, run: "minikube start -p download-only-611017"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

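The v1.20.0 preload fetched above carries its expected digest in the download URL's ?checksum= query parameter. A minimal sketch to re-verify the cached tarball offline, assuming md5sum is available; the digest and cache path are taken verbatim from the download log line:

  # Recompute the tarball's md5 and compare it to the md5:... value from
  # the ?checksum= parameter in the logged download URL.
  cd /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball
  echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -
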
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-611017
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (6.89s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-423730 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-423730 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.885881012s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.89s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 11:35:47.715623 2903158 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0923 11:35:47.715671 2903158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-423730
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-423730: exit status 85 (70.762808ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-611017 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-611017        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| delete  | -p download-only-611017        | download-only-611017 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC | 23 Sep 24 11:35 UTC |
	| start   | -o=json --download-only        | download-only-423730 | jenkins | v1.34.0 | 23 Sep 24 11:35 UTC |                     |
	|         | -p download-only-423730        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:35:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:35:40.873745 2903363 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:35:40.873951 2903363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:40.873982 2903363 out.go:358] Setting ErrFile to fd 2...
	I0923 11:35:40.874003 2903363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:35:40.874281 2903363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 11:35:40.874743 2903363 out.go:352] Setting JSON to true
	I0923 11:35:40.875697 2903363 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":155888,"bootTime":1726935453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 11:35:40.875809 2903363 start.go:139] virtualization:  
	I0923 11:35:40.877939 2903363 out.go:97] [download-only-423730] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 11:35:40.878229 2903363 notify.go:220] Checking for updates...
	I0923 11:35:40.880886 2903363 out.go:169] MINIKUBE_LOCATION=19688
	I0923 11:35:40.882721 2903363 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:35:40.884579 2903363 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 11:35:40.886325 2903363 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 11:35:40.887915 2903363 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 11:35:40.890871 2903363 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:35:40.891177 2903363 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:35:40.923109 2903363 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:35:40.923221 2903363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:40.971722 2903363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 11:35:40.961773314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:40.971835 2903363 docker.go:318] overlay module found
	I0923 11:35:40.973745 2903363 out.go:97] Using the docker driver based on user configuration
	I0923 11:35:40.973769 2903363 start.go:297] selected driver: docker
	I0923 11:35:40.973775 2903363 start.go:901] validating driver "docker" against <nil>
	I0923 11:35:40.973875 2903363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:35:41.036816 2903363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 11:35:41.011792348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 11:35:41.037031 2903363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:35:41.037353 2903363 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 11:35:41.037529 2903363 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:35:41.039271 2903363 out.go:169] Using Docker driver with root privileges
	I0923 11:35:41.041069 2903363 cni.go:84] Creating CNI manager for ""
	I0923 11:35:41.041140 2903363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0923 11:35:41.041154 2903363 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 11:35:41.041253 2903363 start.go:340] cluster config:
	{Name:download-only-423730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-423730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:35:41.043082 2903363 out.go:97] Starting "download-only-423730" primary control-plane node in "download-only-423730" cluster
	I0923 11:35:41.043099 2903363 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0923 11:35:41.044724 2903363 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:35:41.044747 2903363 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:35:41.044924 2903363 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:35:41.060956 2903363 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 11:35:41.061123 2903363 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 11:35:41.061149 2903363 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 11:35:41.061157 2903363 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 11:35:41.061165 2903363 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 11:35:41.097583 2903363 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0923 11:35:41.097608 2903363 cache.go:56] Caching tarball of preloaded images
	I0923 11:35:41.097767 2903363 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0923 11:35:41.100239 2903363 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 11:35:41.100268 2903363 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0923 11:35:41.183183 2903363 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19688-2897765/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-423730 host does not exist
	  To start a cluster, run: "minikube start -p download-only-423730"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-423730
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I0923 11:35:48.948976 2903158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-046209 --alsologtostderr --binary-mirror http://127.0.0.1:34157 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-046209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-046209
--- PASS: TestBinaryMirror (0.55s)

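TestBinaryMirror validates kubectl against the published .sha256 file named in the log line above; the same check can be reproduced by hand. A sketch, assuming curl and sha256sum are available:

  # Download kubectl v1.31.1 for linux/arm64 and verify it against the
  # companion .sha256 file referenced in the log.
  curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl"
  echo "$(curl -sL https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum -c -
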
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-348379
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-348379: exit status 85 (76.855605ms)

-- stdout --
	* Profile "addons-348379" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-348379"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

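This pre-setup test (and the disable variant that follows) relies on the addons command returning exit status 85 when the target profile does not exist yet. A quick probe, using the profile name from the log:

  # Expect exit status 85 while the addons-348379 profile has not been created.
  out/minikube-linux-arm64 addons enable dashboard -p addons-348379
  echo "exit: $?"
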
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-348379
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-348379: exit status 85 (90.620472ms)

-- stdout --
	* Profile "addons-348379" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-348379"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (422.68s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-348379 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-348379 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (7m2.670993276s)
--- PASS: TestAddons/Setup (422.68s)

TestAddons/parallel/InspektorGadget (11.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xl5qc" [41dc3424-c623-4804-93de-616b2916d6ed] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003884853s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-348379
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-348379: (5.863393975s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

TestAddons/parallel/MetricsServer (5.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.495103ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-dgpbq" [7eafdf5f-4ae3-46ac-a5af-965235e8c031] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00388868s
addons_test.go:413: (dbg) Run:  kubectl --context addons-348379 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

TestAddons/parallel/CloudSpanner (6.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-lbht7" [6bd20673-32b3-43aa-abb9-3444ff9e3dd7] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003981631s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-348379
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xqqn9" [ee088a28-253c-4cbb-bd5e-e8798378f50c] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003774386s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-348379
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (10.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-j4xcr" [933d0719-bb66-4c27-9808-eb14855569f8] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005240208s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-348379 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-348379 addons disable yakd --alsologtostderr -v=1: (5.82237441s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

TestAddons/StoppedEnableDisable (6.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-348379
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-348379: (6.097226458s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-348379
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-348379
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-348379
--- PASS: TestAddons/StoppedEnableDisable (6.38s)

TestCertOptions (32.46s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-007095 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0923 12:40:55.378339 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-007095 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (29.414705884s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-007095 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-007095 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-007095 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-007095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-007095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-007095: (2.335584697s)
--- PASS: TestCertOptions (32.46s)

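The ssh step above dumps the apiserver certificate; a sketch of how the extra SANs requested via --apiserver-ips/--apiserver-names could be spot-checked in that output (the grep pattern is an illustration, not part of the test):

  # Print the certificate and confirm the requested IP/DNS SANs are present.
  out/minikube-linux-arm64 -p cert-options-007095 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -E '192\.168\.15\.15|www\.google\.com'
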
TestCertExpiration (225.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-130595 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-130595 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.238971912s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-130595 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-130595 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.360222741s)
helpers_test.go:175: Cleaning up "cert-expiration-130595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-130595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-130595: (2.355153036s)
--- PASS: TestCertExpiration (225.96s)

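After the --cert-expiration=8760h restart, the new validity window can be read straight off the certificate. A sketch, assuming the apiserver cert lives at the same path probed in TestCertOptions above:

  # Show notBefore/notAfter for the regenerated apiserver certificate.
  out/minikube-linux-arm64 -p cert-expiration-130595 ssh \
    "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
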
TestForceSystemdFlag (40.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-410328 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-410328 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.661403924s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-410328 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-410328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-410328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-410328: (2.358434754s)
--- PASS: TestForceSystemdFlag (40.39s)

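The test reads /etc/containerd/config.toml back from the node, but the report does not show the file's contents. A hedged way to spot-check that --force-systemd flipped containerd's cgroup driver (SystemdCgroup is containerd's standard runc option; that it appears in this config is an assumption here):

  # With --force-systemd the runc options should read "SystemdCgroup = true".
  out/minikube-linux-arm64 -p force-systemd-flag-410328 ssh \
    "cat /etc/containerd/config.toml" | grep SystemdCgroup
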
TestForceSystemdEnv (42.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-461049 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-461049 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.806833489s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-461049 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-461049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-461049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-461049: (2.195352507s)
--- PASS: TestForceSystemdEnv (42.38s)

TestDockerEnvContainerd (44.39s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-414221 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-414221 --driver=docker  --container-runtime=containerd: (28.891874641s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-414221"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lqiI6iCqByyN/agent.2919549" SSH_AGENT_PID="2919550" DOCKER_HOST=ssh://docker@127.0.0.1:41797 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lqiI6iCqByyN/agent.2919549" SSH_AGENT_PID="2919550" DOCKER_HOST=ssh://docker@127.0.0.1:41797 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lqiI6iCqByyN/agent.2919549" SSH_AGENT_PID="2919550" DOCKER_HOST=ssh://docker@127.0.0.1:41797 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.13875007s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lqiI6iCqByyN/agent.2919549" SSH_AGENT_PID="2919550" DOCKER_HOST=ssh://docker@127.0.0.1:41797 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-414221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-414221
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-414221: (1.955538046s)
--- PASS: TestDockerEnvContainerd (44.39s)

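The test splices the SSH-agent variables into each command by hand; the usual interactive form evaluates the docker-env output instead. A sketch using the profile and commands from the log:

  # Point the local docker client at the minikube node over SSH, then run
  # the same build/list steps the test performs.
  eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-414221)"
  docker version
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls
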
TestErrorSpam/setup (28.7s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-181775 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-181775 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-181775 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-181775 --driver=docker  --container-runtime=containerd: (28.698512487s)
--- PASS: TestErrorSpam/setup (28.70s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (1.87s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 pause
--- PASS: TestErrorSpam/pause (1.87s)

TestErrorSpam/unpause (2.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 unpause: (1.07310703s)
--- PASS: TestErrorSpam/unpause (2.35s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 stop: (1.300126361s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-181775 --log_dir /tmp/nospam-181775 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19688-2897765/.minikube/files/etc/test/nested/copy/2903158/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.7s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-100969 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-100969 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m28.695192759s)
--- PASS: TestFunctional/serial/StartWithProxy (88.70s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.99s)

=== RUN   TestFunctional/serial/SoftStart
I0923 12:07:36.259983 2903158 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-100969 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-100969 --alsologtostderr -v=8: (5.991405503s)
functional_test.go:663: soft start took 5.991965376s for "functional-100969" cluster.
I0923 12:07:42.251691 2903158 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (5.99s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-100969 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 cache add registry.k8s.io/pause:3.1: (1.442141289s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 cache add registry.k8s.io/pause:3.3: (1.529812168s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 cache add registry.k8s.io/pause:latest: (1.180995418s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-100969 /tmp/TestFunctionalserialCacheCmdcacheadd_local404612442/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cache add minikube-local-cache-test:functional-100969
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cache delete minikube-local-cache-test:functional-100969
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-100969
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.400452ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 cache reload: (1.082307384s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 kubectl -- --context functional-100969 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-100969 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (39.62s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-100969 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0923 12:07:52.304701 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:52.311248 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:52.322691 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:52.344078 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:52.385445 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:52.466915 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:52.628437 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:52.950167 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:53.591999 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:54.873295 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:07:57.434932 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:08:02.556338 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:08:12.797952 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-100969 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.620584834s)
functional_test.go:761: restart took 39.620694519s for "functional-100969" cluster.
I0923 12:08:30.379574 2903158 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.62s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-100969 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 logs: (1.735509469s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

TestFunctional/serial/LogsFileCmd (1.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 logs --file /tmp/TestFunctionalserialLogsFileCmd1607362241/001/logs.txt
E0923 12:08:33.279577 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 logs --file /tmp/TestFunctionalserialLogsFileCmd1607362241/001/logs.txt: (1.816689096s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)

TestFunctional/serial/InvalidService (4.81s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-100969 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-100969
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-100969: exit status 115 (681.361229ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32621 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-100969 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.81s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 config get cpus: exit status 14 (64.14827ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 config get cpus: exit status 14 (68.398244ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (9.39s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-100969 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-100969 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2935918: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.39s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-100969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-100969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (183.303733ms)

-- stdout --
	* [functional-100969] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0923 12:09:12.565265 2934529 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:09:12.565468 2934529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:09:12.565494 2934529 out.go:358] Setting ErrFile to fd 2...
	I0923 12:09:12.565516 2934529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:09:12.565842 2934529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:09:12.566304 2934529 out.go:352] Setting JSON to false
	I0923 12:09:12.567451 2934529 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":157900,"bootTime":1726935453,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 12:09:12.567559 2934529 start.go:139] virtualization:  
	I0923 12:09:12.570845 2934529 out.go:177] * [functional-100969] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 12:09:12.573574 2934529 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 12:09:12.573619 2934529 notify.go:220] Checking for updates...
	I0923 12:09:12.575756 2934529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:09:12.580155 2934529 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:09:12.582370 2934529 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 12:09:12.584502 2934529 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 12:09:12.586352 2934529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:09:12.589111 2934529 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:09:12.589792 2934529 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:09:12.619468 2934529 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 12:09:12.619688 2934529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:09:12.678022 2934529 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 12:09:12.668080588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:09:12.678137 2934529 docker.go:318] overlay module found
	I0923 12:09:12.681072 2934529 out.go:177] * Using the docker driver based on existing profile
	I0923 12:09:12.683273 2934529 start.go:297] selected driver: docker
	I0923 12:09:12.683372 2934529 start.go:901] validating driver "docker" against &{Name:functional-100969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-100969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:09:12.683527 2934529 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:09:12.686455 2934529 out.go:201] 
	W0923 12:09:12.688755 2934529 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 12:09:12.691169 2934529 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-100969 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-100969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-100969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (181.966883ms)

-- stdout --
	* [functional-100969] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0923 12:09:17.804690 2935695 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:09:17.804834 2935695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:09:17.804844 2935695 out.go:358] Setting ErrFile to fd 2...
	I0923 12:09:17.804850 2935695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:09:17.805748 2935695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:09:17.806143 2935695 out.go:352] Setting JSON to false
	I0923 12:09:17.807134 2935695 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":157905,"bootTime":1726935453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 12:09:17.807209 2935695 start.go:139] virtualization:  
	I0923 12:09:17.810937 2935695 out.go:177] * [functional-100969] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0923 12:09:17.813159 2935695 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 12:09:17.813291 2935695 notify.go:220] Checking for updates...
	I0923 12:09:17.817519 2935695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:09:17.819693 2935695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:09:17.821706 2935695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 12:09:17.823789 2935695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 12:09:17.826004 2935695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:09:17.828763 2935695 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:09:17.829302 2935695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:09:17.855468 2935695 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 12:09:17.855595 2935695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:09:17.914253 2935695 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 12:09:17.904190957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:09:17.914366 2935695 docker.go:318] overlay module found
	I0923 12:09:17.916662 2935695 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 12:09:17.918501 2935695 start.go:297] selected driver: docker
	I0923 12:09:17.918521 2935695 start.go:901] validating driver "docker" against &{Name:functional-100969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-100969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:09:17.918646 2935695 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:09:17.921867 2935695 out.go:201] 
	W0923 12:09:17.923924 2935695 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 12:09:17.925673 2935695 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (11.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-100969 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-100969 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-95n8x" [44c5b4ce-2050-45d7-834d-12792fc88739] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-95n8x" [44c5b4ce-2050-45d7-834d-12792fc88739] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003452771s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32525
functional_test.go:1675: http://192.168.49.2:32525: success! body:

Hostname: hello-node-connect-65d86f57f4-95n8x

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32525
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.66s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cd1efeb9-51bf-493c-8a36-3d9272e2aa5a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00405846s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-100969 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-100969 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-100969 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-100969 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5976646e-4646-4161-8ed8-36de8b78f97a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5976646e-4646-4161-8ed8-36de8b78f97a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003908173s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-100969 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-100969 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-100969 delete -f testdata/storage-provisioner/pod.yaml: (1.235300656s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-100969 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5c6c9b2f-419f-4bff-ba6f-f6c788b3c233] Pending
helpers_test.go:344: "sp-pod" [5c6c9b2f-419f-4bff-ba6f-f6c788b3c233] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5c6c9b2f-419f-4bff-ba6f-f6c788b3c233] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003742601s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-100969 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.21s)

TestFunctional/parallel/SSHCmd (0.83s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

TestFunctional/parallel/CpCmd (1.97s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh -n functional-100969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cp functional-100969:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3219926065/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh -n functional-100969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh -n functional-100969 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.97s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2903158/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo cat /etc/test/nested/copy/2903158/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2903158.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo cat /etc/ssl/certs/2903158.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2903158.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo cat /usr/share/ca-certificates/2903158.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/29031582.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo cat /etc/ssl/certs/29031582.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/29031582.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo cat /usr/share/ca-certificates/29031582.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-100969 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh "sudo systemctl is-active docker": exit status 1 (333.532381ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh "sudo systemctl is-active crio": exit status 1 (367.42884ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.3s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 version -o=json --components: (1.302202079s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-100969 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-100969
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-100969
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-100969 image ls --format short --alsologtostderr:
I0923 12:09:26.677467 2937317 out.go:345] Setting OutFile to fd 1 ...
I0923 12:09:26.677675 2937317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:26.677687 2937317 out.go:358] Setting ErrFile to fd 2...
I0923 12:09:26.677694 2937317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:26.678008 2937317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
I0923 12:09:26.678835 2937317 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:26.679069 2937317 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:26.679751 2937317 cli_runner.go:164] Run: docker container inspect functional-100969 --format={{.State.Status}}
I0923 12:09:26.697810 2937317 ssh_runner.go:195] Run: systemctl --version
I0923 12:09:26.697875 2937317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-100969
I0923 12:09:26.715389 2937317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41807 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/functional-100969/id_rsa Username:docker}
I0923 12:09:26.808221 2937317 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

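The stderr trace above shows how `image ls` works on the containerd runtime: minikube resolves the node's SSH endpoint from the Docker container, opens an SSH session, and runs `sudo crictl images --output json`, then renders the decoded list in the requested format. A minimal Go sketch of the decode step, assuming crictl's usual JSON field names (id, repoTags, repoDigests, size) and running crictl directly instead of over SSH:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the fields of `crictl images --output json`
// that the list formats in this report use.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// In minikube this command runs over SSH inside the node;
	// here it runs locally for illustration.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s\t%s\n", tag, img.ID, img.Size)
		}
	}
}

The short, table, json, and yaml formats exercised below are all views over this same decoded list.
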
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-100969 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-100969  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-100969  | sha256:f34797 | 992B   |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-100969 image ls --format table --alsologtostderr:
I0923 12:09:27.650415 2937495 out.go:345] Setting OutFile to fd 1 ...
I0923 12:09:27.650592 2937495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:27.650597 2937495 out.go:358] Setting ErrFile to fd 2...
I0923 12:09:27.650602 2937495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:27.650851 2937495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
I0923 12:09:27.651576 2937495 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:27.651721 2937495 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:27.652213 2937495 cli_runner.go:164] Run: docker container inspect functional-100969 --format={{.State.Status}}
I0923 12:09:27.677231 2937495 ssh_runner.go:195] Run: systemctl --version
I0923 12:09:27.677293 2937495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-100969
I0923 12:09:27.703122 2937495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41807 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/functional-100969/id_rsa Username:docker}
I0923 12:09:27.808328 2937495 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-100969 image ls --format json --alsologtostderr:
[{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/cored
ns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-100969"],"size":"2173567"},{"id":"sha256:b887aca7aed6134b0294
01507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:f3479749b482d7a03cf9591e4a8801fd7feb1ba07915bfce158ff16159a06a57","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-100969"],"size":"992"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/e
tcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-100969 image ls --format json --alsologtostderr:
I0923 12:09:27.366118 2937436 out.go:345] Setting OutFile to fd 1 ...
I0923 12:09:27.366249 2937436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:27.366259 2937436 out.go:358] Setting ErrFile to fd 2...
I0923 12:09:27.366265 2937436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:27.366518 2937436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
I0923 12:09:27.367180 2937436 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:27.367358 2937436 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:27.367877 2937436 cli_runner.go:164] Run: docker container inspect functional-100969 --format={{.State.Status}}
I0923 12:09:27.395646 2937436 ssh_runner.go:195] Run: systemctl --version
I0923 12:09:27.395703 2937436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-100969
I0923 12:09:27.423473 2937436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41807 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/functional-100969/id_rsa Username:docker}
I0923 12:09:27.524360 2937436 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-100969 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:f3479749b482d7a03cf9591e4a8801fd7feb1ba07915bfce158ff16159a06a57
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-100969
size: "992"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-100969
size: "2173567"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-100969 image ls --format yaml --alsologtostderr:
I0923 12:09:26.900071 2937348 out.go:345] Setting OutFile to fd 1 ...
I0923 12:09:26.900211 2937348 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:26.900223 2937348 out.go:358] Setting ErrFile to fd 2...
I0923 12:09:26.900230 2937348 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:26.900592 2937348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
I0923 12:09:26.901515 2937348 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:26.901665 2937348 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:26.902395 2937348 cli_runner.go:164] Run: docker container inspect functional-100969 --format={{.State.Status}}
I0923 12:09:26.920391 2937348 ssh_runner.go:195] Run: systemctl --version
I0923 12:09:26.920489 2937348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-100969
I0923 12:09:26.937750 2937348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41807 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/functional-100969/id_rsa Username:docker}
I0923 12:09:27.028145 2937348 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh pgrep buildkitd
2024/09/23 12:09:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh pgrep buildkitd: exit status 1 (305.685603ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image build -t localhost/my-image:functional-100969 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 image build -t localhost/my-image:functional-100969 testdata/build --alsologtostderr: (3.317322484s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-100969 image build -t localhost/my-image:functional-100969 testdata/build --alsologtostderr:
I0923 12:09:27.444564 2937449 out.go:345] Setting OutFile to fd 1 ...
I0923 12:09:27.445750 2937449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:27.445771 2937449 out.go:358] Setting ErrFile to fd 2...
I0923 12:09:27.445779 2937449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:27.446068 2937449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
I0923 12:09:27.446767 2937449 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:27.447672 2937449 config.go:182] Loaded profile config "functional-100969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0923 12:09:27.448335 2937449 cli_runner.go:164] Run: docker container inspect functional-100969 --format={{.State.Status}}
I0923 12:09:27.469967 2937449 ssh_runner.go:195] Run: systemctl --version
I0923 12:09:27.470035 2937449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-100969
I0923 12:09:27.486938 2937449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41807 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/functional-100969/id_rsa Username:docker}
I0923 12:09:27.581474 2937449 build_images.go:161] Building image from path: /tmp/build.698940436.tar
I0923 12:09:27.581591 2937449 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 12:09:27.592533 2937449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.698940436.tar
I0923 12:09:27.596884 2937449 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.698940436.tar: stat -c "%s %y" /var/lib/minikube/build/build.698940436.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.698940436.tar': No such file or directory
I0923 12:09:27.596914 2937449 ssh_runner.go:362] scp /tmp/build.698940436.tar --> /var/lib/minikube/build/build.698940436.tar (3072 bytes)
I0923 12:09:27.625497 2937449 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.698940436
I0923 12:09:27.634647 2937449 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.698940436 -xf /var/lib/minikube/build/build.698940436.tar
I0923 12:09:27.648333 2937449 containerd.go:394] Building image: /var/lib/minikube/build/build.698940436
I0923 12:09:27.648417 2937449 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.698940436 --local dockerfile=/var/lib/minikube/build/build.698940436 --output type=image,name=localhost/my-image:functional-100969
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ddc610da5d15b1b008b6fa9b119475b4a35dc4de900e3d2c9662c95266fd76b0 0.0s done
#8 exporting config sha256:3ea65feddd6b31040d3a5c180e8864a0c53d3e91c760ea310e4dc5837cad9ea4 0.0s done
#8 naming to localhost/my-image:functional-100969 done
#8 DONE 0.2s
I0923 12:09:30.670157 2937449 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.698940436 --local dockerfile=/var/lib/minikube/build/build.698940436 --output type=image,name=localhost/my-image:functional-100969: (3.021697008s)
I0923 12:09:30.670227 2937449 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.698940436
I0923 12:09:30.682082 2937449 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.698940436.tar
I0923 12:09:30.691434 2937449 build_images.go:217] Built localhost/my-image:functional-100969 from /tmp/build.698940436.tar
I0923 12:09:30.691468 2937449 build_images.go:133] succeeded building to: functional-100969
I0923 12:09:30.691474 2937449 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)

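The build_images.go lines above narrate the round trip behind `image build`: the local testdata/build directory is packed into a tar (/tmp/build.698940436.tar), copied into the node, unpacked under /var/lib/minikube/build, fed to buildctl with the dockerfile.v0 frontend, and both copies are removed afterwards. A rough sketch of just the packing step, assuming a flat context directory of regular files (no symlinks):

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarBuildContext packs dir into a tar stream, the shape in which
// a build context is shipped to the node before buildctl runs.
func tarBuildContext(dir string, w io.Writer) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	out, err := os.Create("/tmp/build-context.tar") // hypothetical path
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := tarBuildContext("testdata/build", out); err != nil {
		panic(err)
	}
}
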
TestFunctional/parallel/ImageCommands/Setup (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-100969
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.85s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image load --daemon kicbase/echo-server:functional-100969 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 image load --daemon kicbase/echo-server:functional-100969 --alsologtostderr: (1.195270447s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image load --daemon kicbase/echo-server:functional-100969 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 image load --daemon kicbase/echo-server:functional-100969 --alsologtostderr: (1.060250738s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-100969
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image load --daemon kicbase/echo-server:functional-100969 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-100969 image load --daemon kicbase/echo-server:functional-100969 --alsologtostderr: (1.223808604s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.85s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "403.893771ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "72.938548ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

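The Took lines come from wrapping each CLI invocation in a wall-clock timer and logging the duration, which the suite can then compare against a bound. A minimal sketch of the pattern; the 500ms threshold here is an illustrative assumption, not the suite's actual limit:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// The report times this exact invocation.
	if err := exec.Command("out/minikube-linux-arm64", "profile", "list").Run(); err != nil {
		panic(err)
	}
	elapsed := time.Since(start)
	fmt.Printf("Took %q to run \"out/minikube-linux-arm64 profile list\"\n", elapsed.String())
	// Assumed threshold, for illustration only.
	if elapsed > 500*time.Millisecond {
		fmt.Println("profile list was slower than expected")
	}
}
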
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "443.711093ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "71.552719ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image save kicbase/echo-server:functional-100969 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-100969 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-100969 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-100969 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-100969 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2933109: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image rm kicbase/echo-server:functional-100969 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-100969 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-100969 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [10d42e7a-45bd-4c15-a034-55dc87ff1444] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [10d42e7a-45bd-4c15-a034-55dc87ff1444] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004208236s
I0923 12:08:55.942694 2903158 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

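The helpers_test.go:344 lines record the pod's transition from Pending (containers not ready) to Running: the harness polls the pods matching the label until they are healthy or the 4m0s budget is spent. A dependency-free sketch of that wait loop, with podPhase standing in for a real API-server query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForPhase polls check until it reports the desired phase or
// the timeout elapses, mirroring the "waiting 4m0s for pods
// matching ..." helper in this report.
func waitForPhase(check func() (string, error), want string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		phase, err := check()
		if err == nil && phase == want {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod did not reach %s within %s", want, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	calls := 0
	// podPhase is a stand-in for querying the API server.
	podPhase := func() (string, error) {
		calls++
		if calls < 3 {
			return "Pending", errors.New("containers not ready")
		}
		return "Running", nil
	}
	if err := waitForPhase(podPhase, "Running", time.Second, 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("run=nginx-svc healthy")
}
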
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-100969
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 image save --daemon kicbase/echo-server:functional-100969 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-100969
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-100969 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.82.206 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

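AccessDirect confirms that the LoadBalancer IP assigned to nginx-svc (10.105.82.206) answers from the host while `minikube tunnel` is running. The check amounts to an HTTP GET with a timeout; a sketch with the IP hard-coded from this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// This IP comes from this run's log; it is only routable
	// while `minikube tunnel` is active on the same host.
	resp, err := client.Get("http://10.105.82.206")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("tunnel at http://10.105.82.206 is working! (%d bytes)\n", len(body))
}
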
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-100969 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-100969 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-100969 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7xjdd" [6e164b9f-3af7-4198-bb39-197df860071a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7xjdd" [6e164b9f-3af7-4198-bb39-197df860071a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00505417s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

TestFunctional/parallel/MountCmd/any-port (7.84s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdany-port626452006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727093352933536306" to /tmp/TestFunctionalparallelMountCmdany-port626452006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727093352933536306" to /tmp/TestFunctionalparallelMountCmdany-port626452006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727093352933536306" to /tmp/TestFunctionalparallelMountCmdany-port626452006/001/test-1727093352933536306
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.998558ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0923 12:09:13.274888 2903158 retry.go:31] will retry after 262.572936ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 12:09 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 12:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 12:09 test-1727093352933536306
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh cat /mount-9p/test-1727093352933536306
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-100969 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [400fb6ae-e9ff-4ac9-b174-f282aa1187a8] Pending
helpers_test.go:344: "busybox-mount" [400fb6ae-e9ff-4ac9-b174-f282aa1187a8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [400fb6ae-e9ff-4ac9-b174-f282aa1187a8] Running
helpers_test.go:344: "busybox-mount" [400fb6ae-e9ff-4ac9-b174-f282aa1187a8] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [400fb6ae-e9ff-4ac9-b174-f282aa1187a8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00448509s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-100969 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdany-port626452006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.84s)

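Each failed findmnt probe above is followed by a retry.go line with a randomized delay: the daemonized `minikube mount` establishes the 9p mount asynchronously, so the harness retries the probe instead of failing on the first miss. A small sketch of that retry-with-jitter pattern, assuming a fixed 500ms cap rather than minikube's actual backoff policy:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryWithJitter re-runs fn until it succeeds or attempts are
// exhausted, sleeping a random sub-500ms delay between tries,
// like the "will retry after 262.572936ms" lines above.
func retryWithJitter(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(rand.Int63n(int64(500 * time.Millisecond)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retryWithJitter(5, func() error {
		// Probe whether the 9p mount has appeared yet.
		return exec.Command("findmnt", "-T", "/mount-9p").Run()
	})
	if err != nil {
		panic(err)
	}
}
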
TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 service list
E0923 12:09:14.244979 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 service list -o json
functional_test.go:1494: Took "656.066851ms" to run "out/minikube-linux-arm64 -p functional-100969 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30587
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

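The found endpoint is simply the node's IP (192.168.49.2) joined with the NodePort that Kubernetes allocated (30587) when DeployApp exposed port 8080. A sketch that reconstructs the same URL with kubectl jsonpath queries, assuming the context and service names from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// NodePort allocated for the hello-node service.
	port, err := exec.Command("kubectl", "--context", "functional-100969",
		"get", "svc", "hello-node",
		"-o", "jsonpath={.spec.ports[0].nodePort}").Output()
	if err != nil {
		panic(err)
	}
	// InternalIP of the (single) node.
	ip, err := exec.Command("kubectl", "--context", "functional-100969",
		"get", "nodes",
		"-o", `jsonpath={.items[0].status.addresses[?(@.type=="InternalIP")].address}`).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("https://%s:%s\n", strings.TrimSpace(string(ip)), strings.TrimSpace(string(port)))
	// This run printed: found endpoint: https://192.168.49.2:30587
}
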
TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30587
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdspecific-port4259370332/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.652434ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0923 12:09:21.171736 2903158 retry.go:31] will retry after 255.211781ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdspecific-port4259370332/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh "sudo umount -f /mount-9p": exit status 1 (291.768961ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-100969 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdspecific-port4259370332/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2647362168/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2647362168/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2647362168/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T" /mount1: exit status 1 (614.788742ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0923 12:09:23.147150 2903158 retry.go:31] will retry after 488.324619ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-100969 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-100969 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2647362168/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2647362168/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-100969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2647362168/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.15s)
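
The "will retry after 488.324619ms" line above shows the harness polling findmnt with a jittered backoff until all three 9p mounts appear. A generic sketch of that polling loop in Go (the delay bounds and attempt count are assumptions, not minikube's actual retry package):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // waitForMount polls `findmnt -T <path>` inside the guest until it
    // succeeds or attempts run out, sleeping a randomized backoff between
    // tries, similar in spirit to the retry seen in the log above.
    func waitForMount(profile, path string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("minikube", "-p", profile, "ssh", "findmnt -T "+path).Run()
            if err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
            if err := waitForMount("functional-100969", m, 5); err != nil {
                fmt.Println(m, "never appeared:", err)
            }
        }
    }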

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-100969
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-100969
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-100969
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (120.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-869098 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 12:10:36.167023 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-869098 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m59.149093421s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (120.01s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (32.17s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-869098 -- rollout status deployment/busybox: (29.233626311s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-6zslk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-p8w64 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-tjlng -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-6zslk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-p8w64 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-tjlng -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-6zslk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-p8w64 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-tjlng -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.17s)
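
The DeployApp block above resolves three names (an external name, the in-cluster short name, and the fully qualified service name) from every busybox replica, proving cluster DNS works from each node of the HA cluster. A sketch of the same loop in Go, using the pod names captured in this run (in practice they would be discovered with `kubectl get pods -o jsonpath=...`, as the log does):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-7dff88458-6zslk", "busybox-7dff88458-p8w64", "busybox-7dff88458-tjlng"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                // Mirror the kubectl exec ... nslookup calls in the log.
                out, err := exec.Command("kubectl", "--context", "ha-869098",
                    "exec", pod, "--", "nslookup", name).CombinedOutput()
                if err != nil {
                    fmt.Printf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
                }
            }
        }
    }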

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-6zslk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-6zslk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-p8w64 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-p8w64 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-tjlng -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-869098 -- exec busybox-7dff88458-tjlng -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)
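
The shell pipeline above pulls the resolved address out of nslookup output: awk 'NR==5' keeps the fifth line and cut -d' ' -f3 keeps its third field, which is the host.minikube.internal address (192.168.49.1, the Docker bridge gateway) that the follow-up ping targets. The same extraction in Go, run against a fabricated sample of busybox-style nslookup output (the sample text is an assumption, not captured from this run):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Fabricated busybox-style nslookup output; real output may differ.
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.49.1 host.minikube.internal"
        lines := strings.Split(sample, "\n")
        fields := strings.Fields(lines[4])      // awk 'NR==5' selects line 5
        fmt.Println("ping target:", fields[2])  // cut -d' ' -f3 selects field 3
    }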

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.61s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-869098 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-869098 -v=7 --alsologtostderr: (23.606662218s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr: (1.005209119s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.61s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-869098 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019662674s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp testdata/cp-test.txt ha-869098:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile807195696/001/cp-test_ha-869098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098:/home/docker/cp-test.txt ha-869098-m02:/home/docker/cp-test_ha-869098_ha-869098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test_ha-869098_ha-869098-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098:/home/docker/cp-test.txt ha-869098-m03:/home/docker/cp-test_ha-869098_ha-869098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test_ha-869098_ha-869098-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098:/home/docker/cp-test.txt ha-869098-m04:/home/docker/cp-test_ha-869098_ha-869098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test_ha-869098_ha-869098-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp testdata/cp-test.txt ha-869098-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile807195696/001/cp-test_ha-869098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m02:/home/docker/cp-test.txt ha-869098:/home/docker/cp-test_ha-869098-m02_ha-869098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test_ha-869098-m02_ha-869098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m02:/home/docker/cp-test.txt ha-869098-m03:/home/docker/cp-test_ha-869098-m02_ha-869098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test_ha-869098-m02_ha-869098-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m02:/home/docker/cp-test.txt ha-869098-m04:/home/docker/cp-test_ha-869098-m02_ha-869098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test_ha-869098-m02_ha-869098-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp testdata/cp-test.txt ha-869098-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile807195696/001/cp-test_ha-869098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m03:/home/docker/cp-test.txt ha-869098:/home/docker/cp-test_ha-869098-m03_ha-869098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test_ha-869098-m03_ha-869098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m03:/home/docker/cp-test.txt ha-869098-m02:/home/docker/cp-test_ha-869098-m03_ha-869098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test_ha-869098-m03_ha-869098-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m03:/home/docker/cp-test.txt ha-869098-m04:/home/docker/cp-test_ha-869098-m03_ha-869098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test_ha-869098-m03_ha-869098-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp testdata/cp-test.txt ha-869098-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile807195696/001/cp-test_ha-869098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m04:/home/docker/cp-test.txt ha-869098:/home/docker/cp-test_ha-869098-m04_ha-869098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098 "sudo cat /home/docker/cp-test_ha-869098-m04_ha-869098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m04:/home/docker/cp-test.txt ha-869098-m02:/home/docker/cp-test_ha-869098-m04_ha-869098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m02 "sudo cat /home/docker/cp-test_ha-869098-m04_ha-869098-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 cp ha-869098-m04:/home/docker/cp-test.txt ha-869098-m03:/home/docker/cp-test_ha-869098-m04_ha-869098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 ssh -n ha-869098-m03 "sudo cat /home/docker/cp-test_ha-869098-m04_ha-869098-m03.txt"
E0923 12:12:52.304028 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.34s)
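
The CopyFile block is an all-pairs matrix over the four nodes: the test file is pushed from the host to each node, pulled back to the host, and copied from every node to every other node, with a `ssh ... sudo cat` verification after each hop. A short Go sketch that enumerates the same command sequence (it only prints the commands; running them is left out):

    package main

    import "fmt"

    func main() {
        nodes := []string{"ha-869098", "ha-869098-m02", "ha-869098-m03", "ha-869098-m04"}
        for _, src := range nodes {
            // host -> node, then node -> host, as in the log above.
            fmt.Printf("minikube -p ha-869098 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
            fmt.Printf("minikube -p ha-869098 cp %s:/home/docker/cp-test.txt /tmp/cp-test_%s.txt\n", src, src)
            for _, dst := range nodes {
                if dst == src {
                    continue
                }
                // node -> every other node.
                fmt.Printf("minikube -p ha-869098 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
                    src, dst, src, dst)
            }
        }
    }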

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-869098 node stop m02 -v=7 --alsologtostderr: (12.206583417s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr: exit status 7 (755.528103ms)

                                                
                                                
-- stdout --
	ha-869098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-869098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-869098-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-869098-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:13:04.811977 2953700 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:13:04.812127 2953700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:13:04.812139 2953700 out.go:358] Setting ErrFile to fd 2...
	I0923 12:13:04.812144 2953700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:13:04.812417 2953700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:13:04.812600 2953700 out.go:352] Setting JSON to false
	I0923 12:13:04.812628 2953700 mustload.go:65] Loading cluster: ha-869098
	I0923 12:13:04.812717 2953700 notify.go:220] Checking for updates...
	I0923 12:13:04.813087 2953700 config.go:182] Loaded profile config "ha-869098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:13:04.813109 2953700 status.go:174] checking status of ha-869098 ...
	I0923 12:13:04.813652 2953700 cli_runner.go:164] Run: docker container inspect ha-869098 --format={{.State.Status}}
	I0923 12:13:04.831925 2953700 status.go:364] ha-869098 host status = "Running" (err=<nil>)
	I0923 12:13:04.831948 2953700 host.go:66] Checking if "ha-869098" exists ...
	I0923 12:13:04.832258 2953700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-869098
	I0923 12:13:04.850807 2953700 host.go:66] Checking if "ha-869098" exists ...
	I0923 12:13:04.851125 2953700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:13:04.851180 2953700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-869098
	I0923 12:13:04.882019 2953700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41812 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/ha-869098/id_rsa Username:docker}
	I0923 12:13:04.981956 2953700 ssh_runner.go:195] Run: systemctl --version
	I0923 12:13:04.986556 2953700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:13:05.000169 2953700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:13:05.077426 2953700 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-23 12:13:05.066624157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:13:05.078167 2953700 kubeconfig.go:125] found "ha-869098" server: "https://192.168.49.254:8443"
	I0923 12:13:05.078209 2953700 api_server.go:166] Checking apiserver status ...
	I0923 12:13:05.078262 2953700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:13:05.091125 2953700 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1444/cgroup
	I0923 12:13:05.102444 2953700 api_server.go:182] apiserver freezer: "4:freezer:/docker/248fdef9de4ad145fa2dac88a9054d77f7e5c83f9c74c66d44dfc97ba170e2e4/kubepods/burstable/podf03b6ddba98b1e2c2e970a1512419e87/73f0674b01e08f0e0d21c66bd364f227593cdf315942496917705234d5926af7"
	I0923 12:13:05.102525 2953700 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/248fdef9de4ad145fa2dac88a9054d77f7e5c83f9c74c66d44dfc97ba170e2e4/kubepods/burstable/podf03b6ddba98b1e2c2e970a1512419e87/73f0674b01e08f0e0d21c66bd364f227593cdf315942496917705234d5926af7/freezer.state
	I0923 12:13:05.113282 2953700 api_server.go:204] freezer state: "THAWED"
	I0923 12:13:05.113314 2953700 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 12:13:05.121238 2953700 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 12:13:05.121269 2953700 status.go:456] ha-869098 apiserver status = Running (err=<nil>)
	I0923 12:13:05.121281 2953700 status.go:176] ha-869098 status: &{Name:ha-869098 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:13:05.121299 2953700 status.go:174] checking status of ha-869098-m02 ...
	I0923 12:13:05.121635 2953700 cli_runner.go:164] Run: docker container inspect ha-869098-m02 --format={{.State.Status}}
	I0923 12:13:05.138441 2953700 status.go:364] ha-869098-m02 host status = "Stopped" (err=<nil>)
	I0923 12:13:05.138465 2953700 status.go:377] host is not running, skipping remaining checks
	I0923 12:13:05.138473 2953700 status.go:176] ha-869098-m02 status: &{Name:ha-869098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:13:05.138494 2953700 status.go:174] checking status of ha-869098-m03 ...
	I0923 12:13:05.138811 2953700 cli_runner.go:164] Run: docker container inspect ha-869098-m03 --format={{.State.Status}}
	I0923 12:13:05.158643 2953700 status.go:364] ha-869098-m03 host status = "Running" (err=<nil>)
	I0923 12:13:05.158673 2953700 host.go:66] Checking if "ha-869098-m03" exists ...
	I0923 12:13:05.159009 2953700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-869098-m03
	I0923 12:13:05.178474 2953700 host.go:66] Checking if "ha-869098-m03" exists ...
	I0923 12:13:05.178945 2953700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:13:05.178998 2953700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-869098-m03
	I0923 12:13:05.198053 2953700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41822 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/ha-869098-m03/id_rsa Username:docker}
	I0923 12:13:05.293251 2953700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:13:05.305369 2953700 kubeconfig.go:125] found "ha-869098" server: "https://192.168.49.254:8443"
	I0923 12:13:05.305397 2953700 api_server.go:166] Checking apiserver status ...
	I0923 12:13:05.305464 2953700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:13:05.318617 2953700 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	I0923 12:13:05.335686 2953700 api_server.go:182] apiserver freezer: "4:freezer:/docker/21b1a8fcf92d0e73b10dd8464c35a533ece0e9930b0d810930f0beae7ccfab0a/kubepods/burstable/podc09d08217d631e6a19fc0c9fec55bc1d/9459664764c03f317ad2166a92b59ff0e76aac270803229d3e93c14a61499bfa"
	I0923 12:13:05.335779 2953700 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/21b1a8fcf92d0e73b10dd8464c35a533ece0e9930b0d810930f0beae7ccfab0a/kubepods/burstable/podc09d08217d631e6a19fc0c9fec55bc1d/9459664764c03f317ad2166a92b59ff0e76aac270803229d3e93c14a61499bfa/freezer.state
	I0923 12:13:05.346416 2953700 api_server.go:204] freezer state: "THAWED"
	I0923 12:13:05.346514 2953700 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 12:13:05.354687 2953700 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 12:13:05.354718 2953700 status.go:456] ha-869098-m03 apiserver status = Running (err=<nil>)
	I0923 12:13:05.354729 2953700 status.go:176] ha-869098-m03 status: &{Name:ha-869098-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:13:05.354784 2953700 status.go:174] checking status of ha-869098-m04 ...
	I0923 12:13:05.355098 2953700 cli_runner.go:164] Run: docker container inspect ha-869098-m04 --format={{.State.Status}}
	I0923 12:13:05.372193 2953700 status.go:364] ha-869098-m04 host status = "Running" (err=<nil>)
	I0923 12:13:05.372230 2953700 host.go:66] Checking if "ha-869098-m04" exists ...
	I0923 12:13:05.372544 2953700 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-869098-m04
	I0923 12:13:05.390708 2953700 host.go:66] Checking if "ha-869098-m04" exists ...
	I0923 12:13:05.391157 2953700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:13:05.391203 2953700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-869098-m04
	I0923 12:13:05.407888 2953700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41827 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/ha-869098-m04/id_rsa Username:docker}
	I0923 12:13:05.500380 2953700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:13:05.512542 2953700 status.go:176] ha-869098-m04 status: &{Name:ha-869098-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)
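
The stderr trace above shows how the status probe checks an apiserver on a running node: it pgreps the kube-apiserver PID, resolves that PID's freezer cgroup from /proc/<pid>/cgroup, confirms the freezer state is THAWED, and only then hits /healthz; a stopped host short-circuits all of that. A sketch of the freezer lookup in Go, assuming the cgroup v1 layout seen in the log (cgroup v2 hosts lay this out differently):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // freezerState resolves a PID's freezer cgroup from /proc/<pid>/cgroup
    // and reads its freezer.state, which reports THAWED for a healthy,
    // unfrozen apiserver container.
    func freezerState(pid int) (string, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(data), "\n") {
            parts := strings.SplitN(line, ":", 3) // e.g. "4:freezer:/docker/..."
            if len(parts) == 3 && parts[1] == "freezer" {
                state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
                return strings.TrimSpace(string(state)), err
            }
        }
        return "", fmt.Errorf("no freezer controller for pid %d", pid)
    }

    func main() {
        state, err := freezerState(os.Getpid())
        fmt.Println(state, err)
    }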

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.21s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 node start m02 -v=7 --alsologtostderr
E0923 12:13:20.015265 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-869098 node start m02 -v=7 --alsologtostderr: (31.064212659s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr: (1.03878019s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.21s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.011484685s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.59s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-869098 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-869098 -v=7 --alsologtostderr
E0923 12:13:46.479545 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:46.486233 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:46.497976 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:46.519467 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:46.560969 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:46.642450 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:46.804001 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:47.125568 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:47.767577 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:49.048948 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:51.611479 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:56.733750 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:14:06.975304 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-869098 -v=7 --alsologtostderr: (37.285232384s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-869098 --wait=true -v=7 --alsologtostderr
E0923 12:14:27.457532 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:08.419269 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-869098 --wait=true -v=7 --alsologtostderr: (1m38.141754556s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-869098
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (135.59s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.79s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-869098 node delete m03 -v=7 --alsologtostderr: (9.887891271s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.79s)
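
The go-template passed to kubectl above prints the status of each node's Ready condition, one per line. kubectl evaluates go-templates over unstructured JSON maps, which is why the lowercase keys (.type, .status) work; the exact same template can be exercised locally with Go's text/template against a small map-based stand-in (the two-node sample data below is fabricated for illustration):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Minimal stand-in for `kubectl get nodes -o json` output.
        nodes := map[string]any{
            "items": []any{
                map[string]any{"status": map[string]any{"conditions": []any{
                    map[string]any{"type": "Ready", "status": "True"},
                }}},
                map[string]any{"status": map[string]any{"conditions": []any{
                    map[string]any{"type": "Ready", "status": "True"},
                }}},
            },
        }
        // The template string is copied verbatim from the kubectl call above.
        tmpl := template.Must(template.New("ready").Parse(
            `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
        _ = tmpl.Execute(os.Stdout, nodes) // prints " True" for each Ready node
    }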

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.04s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 stop -v=7 --alsologtostderr
E0923 12:16:30.346433 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-869098 stop -v=7 --alsologtostderr: (35.932585092s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr: exit status 7 (107.050131ms)

                                                
                                                
-- stdout --
	ha-869098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-869098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-869098-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:16:42.630620 2968113 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:16:42.630791 2968113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:16:42.630816 2968113 out.go:358] Setting ErrFile to fd 2...
	I0923 12:16:42.630822 2968113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:16:42.631200 2968113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:16:42.631606 2968113 out.go:352] Setting JSON to false
	I0923 12:16:42.631642 2968113 mustload.go:65] Loading cluster: ha-869098
	I0923 12:16:42.631886 2968113 notify.go:220] Checking for updates...
	I0923 12:16:42.632085 2968113 config.go:182] Loaded profile config "ha-869098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:16:42.632105 2968113 status.go:174] checking status of ha-869098 ...
	I0923 12:16:42.633018 2968113 cli_runner.go:164] Run: docker container inspect ha-869098 --format={{.State.Status}}
	I0923 12:16:42.650664 2968113 status.go:364] ha-869098 host status = "Stopped" (err=<nil>)
	I0923 12:16:42.650687 2968113 status.go:377] host is not running, skipping remaining checks
	I0923 12:16:42.650695 2968113 status.go:176] ha-869098 status: &{Name:ha-869098 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:16:42.650737 2968113 status.go:174] checking status of ha-869098-m02 ...
	I0923 12:16:42.651054 2968113 cli_runner.go:164] Run: docker container inspect ha-869098-m02 --format={{.State.Status}}
	I0923 12:16:42.671744 2968113 status.go:364] ha-869098-m02 host status = "Stopped" (err=<nil>)
	I0923 12:16:42.671770 2968113 status.go:377] host is not running, skipping remaining checks
	I0923 12:16:42.671778 2968113 status.go:176] ha-869098-m02 status: &{Name:ha-869098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:16:42.671796 2968113 status.go:174] checking status of ha-869098-m04 ...
	I0923 12:16:42.672094 2968113 cli_runner.go:164] Run: docker container inspect ha-869098-m04 --format={{.State.Status}}
	I0923 12:16:42.688272 2968113 status.go:364] ha-869098-m04 host status = "Stopped" (err=<nil>)
	I0923 12:16:42.688301 2968113 status.go:377] host is not running, skipping remaining checks
	I0923 12:16:42.688309 2968113 status.go:176] ha-869098-m04 status: &{Name:ha-869098-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)
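
Note that `minikube status` deliberately exits non-zero when any component is stopped (exit status 7 in this run), so the test asserts on the exit code instead of treating it as a command failure. A sketch of reading that code from Go (the interpretation of 7 is taken from this log, not from a documented contract):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // A non-zero exit (7 above) flags stopped components; the command
        // itself still ran successfully and printed its status report.
        err := exec.Command("minikube", "-p", "ha-869098", "status").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("status exit code:", ee.ExitCode())
            return
        }
        if err != nil {
            fmt.Println("could not run minikube:", err)
            return
        }
        fmt.Println("all components running")
    }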

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (43.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-869098 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-869098 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (42.353648793s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (43.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.29s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-869098 --control-plane -v=7 --alsologtostderr
E0923 12:17:52.303983 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-869098 --control-plane -v=7 --alsologtostderr: (43.238106139s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-869098 status -v=7 --alsologtostderr: (1.055321244s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                    
TestJSONOutput/start/Command (47.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-719821 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0923 12:18:46.478053 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-719821 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (47.695351391s)
--- PASS: TestJSONOutput/start/Command (47.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-719821 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-719821 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-719821 --output=json --user=testUser
E0923 12:19:14.188507 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-719821 --output=json --user=testUser: (5.733627825s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
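
The Distinct/IncreasingCurrentSteps subtests above parse the JSON event stream emitted by the start/pause/unpause/stop commands and assert that the step counter never repeats and never decreases. A sketch of that check in Go, reading the event lines from stdin (the field shapes match the events shown under TestErrorJSONOutput below; the exact assertion logic is an assumption):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strconv"
    )

    func main() {
        last := -1
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev struct {
                Type string            `json:"type"`
                Data map[string]string `json:"data"`
            }
            // Skip non-JSON lines and anything that is not a step event.
            if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
                continue
            }
            step, err := strconv.Atoi(ev.Data["currentstep"])
            if err != nil || step <= last {
                fmt.Println("bad step sequence at:", sc.Text())
                os.Exit(1)
            }
            last = step
        }
    }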

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-973801 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-973801 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.430008ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f025a6dd-2b4e-4187-9788-a9f4a0d167c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-973801] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"88630040-ca7f-43a6-b82f-dfad35dab689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19688"}}
	{"specversion":"1.0","id":"b4c3727f-ddef-47af-b318-746956765904","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3f69c550-7070-4bd1-bf34-caa77748f3ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig"}}
	{"specversion":"1.0","id":"3353e794-cf61-47d0-a251-f980e530aea8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube"}}
	{"specversion":"1.0","id":"1118d0be-5f83-40f2-beff-200f9b0af6b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"90089dd6-ccea-43ba-933a-7680ee6cb3f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a83a871-86b0-43a0-b99c-fdd3727bac16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-973801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-973801
--- PASS: TestErrorJSONOutput (0.23s)
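Every line of the JSON output captured above is a CloudEvents envelope emitted by minikube's --output=json mode. As a minimal sketch (not part of the test suite, decoding only the fields visible in this log), a Go program can stream and unpack those events; the "data" payload is kept as a generic string map because its keys vary by event type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the CloudEvents fields visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%-40s %s\n", e.Type, e.Data["message"])
	}
}

Fed the stdout above, this would end with the io.k8s.sigs.minikube.error event carrying DRV_UNSUPPORTED_OS, matching the expected exit status 56.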
TestKicCustomNetwork/create_custom_network (39.62s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-617382 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-617382 --network=: (37.482027354s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-617382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-617382
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-617382: (2.111579359s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.62s)

TestKicCustomNetwork/use_default_bridge_network (39.27s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-031772 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-031772 --network=bridge: (37.284014594s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-031772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-031772
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-031772: (1.967059374s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.27s)

TestKicExistingNetwork (34.65s)

=== RUN   TestKicExistingNetwork
I0923 12:20:38.366347 2903158 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 12:20:38.383738 2903158 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 12:20:38.383814 2903158 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 12:20:38.383833 2903158 cli_runner.go:164] Run: docker network inspect existing-network
W0923 12:20:38.397753 2903158 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 12:20:38.397782 2903158 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0923 12:20:38.397798 2903158 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0923 12:20:38.397903 2903158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 12:20:38.418205 2903158 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95222ab31dd0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:42:b2:dd:ab} reservation:<nil>}
I0923 12:20:38.418552 2903158 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c214b0}
I0923 12:20:38.418575 2903158 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 12:20:38.418628 2903158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 12:20:38.492309 2903158 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-330683 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-330683 --network=existing-network: (32.48685251s)
helpers_test.go:175: Cleaning up "existing-network-330683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-330683
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-330683: (2.003637795s)
I0923 12:21:12.998743 2903158 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.65s)
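The trace above shows the pattern minikube relies on for --network=existing-network: inspect the network, find 192.168.49.0/24 already taken, pick the next free private subnet, and create the bridge before start attaches to it. As a hedged sketch of pre-creating such a network yourself, the following Go program shells out with the same flags that appear in the log (the subnet is only an example and must be a range no other bridge is using):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags copied from the `docker network create` call in the trace above;
	// 192.168.58.0/24 is illustrative, not a requirement.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}

A subsequent start with --network=existing-network then reuses that bridge instead of allocating a new one, which is what the test asserts.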
TestKicCustomSubnet (32.43s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-977710 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-977710 --subnet=192.168.60.0/24: (30.265562548s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-977710 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-977710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-977710
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-977710: (2.138058793s)
--- PASS: TestKicCustomSubnet (32.43s)

TestKicStaticIP (33.84s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-844053 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-844053 --static-ip=192.168.200.200: (31.584237854s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-844053 ip
helpers_test.go:175: Cleaning up "static-ip-844053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-844053
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-844053: (2.119910922s)
--- PASS: TestKicStaticIP (33.84s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (67.58s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-111912 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-111912 --driver=docker  --container-runtime=containerd: (28.260754733s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-114735 --driver=docker  --container-runtime=containerd
E0923 12:22:52.304304 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-114735 --driver=docker  --container-runtime=containerd: (33.752405088s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-111912
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-114735
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-114735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-114735
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-114735: (1.980568488s)
helpers_test.go:175: Cleaning up "first-111912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-111912
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-111912: (2.22200396s)
--- PASS: TestMinikubeProfile (67.58s)

TestMountStart/serial/StartWithMountFirst (6.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-603287 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-603287 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.571411005s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.57s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-603287 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-605347 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-605347 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.113896738s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.11s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-605347 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-603287 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-603287 --alsologtostderr -v=5: (1.599367122s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-605347 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.37s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-605347
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-605347: (1.370464914s)
--- PASS: TestMountStart/serial/Stop (1.37s)

TestMountStart/serial/RestartStopped (7.21s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-605347
E0923 12:23:46.478305 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-605347: (6.209025857s)
--- PASS: TestMountStart/serial/RestartStopped (7.21s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-605347 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (66.27s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361858 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 12:24:15.376830 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361858 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.736587938s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.27s)

TestMultiNode/serial/DeployApp2Nodes (19.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-361858 -- rollout status deployment/busybox: (17.414520055s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-ggrq5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-j94mf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-ggrq5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-j94mf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-ggrq5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-j94mf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.39s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-ggrq5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-ggrq5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-j94mf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-361858 -- exec busybox-7dff88458-j94mf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
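The exec commands above recover the host IP from busybox nslookup output with awk 'NR==5' | cut -d' ' -f3 (fifth line, third space-separated field) and then ping it. A small Go equivalent of that extraction, assuming busybox-style output with single-space separators:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: take the fifth
// line of the nslookup output and return its third space-separated field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // cut splits on every single space
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox-style output; field positions differ between
	// resolver implementations, which is why the test pins busybox.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.67.1
}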
TestMultiNode/serial/AddNode (18.5s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-361858 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-361858 -v 3 --alsologtostderr: (17.825987693s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.50s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-361858 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (10s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp testdata/cp-test.txt multinode-361858:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3698457069/001/cp-test_multinode-361858.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858:/home/docker/cp-test.txt multinode-361858-m02:/home/docker/cp-test_multinode-361858_multinode-361858-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m02 "sudo cat /home/docker/cp-test_multinode-361858_multinode-361858-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858:/home/docker/cp-test.txt multinode-361858-m03:/home/docker/cp-test_multinode-361858_multinode-361858-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m03 "sudo cat /home/docker/cp-test_multinode-361858_multinode-361858-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp testdata/cp-test.txt multinode-361858-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3698457069/001/cp-test_multinode-361858-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858-m02:/home/docker/cp-test.txt multinode-361858:/home/docker/cp-test_multinode-361858-m02_multinode-361858.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858 "sudo cat /home/docker/cp-test_multinode-361858-m02_multinode-361858.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858-m02:/home/docker/cp-test.txt multinode-361858-m03:/home/docker/cp-test_multinode-361858-m02_multinode-361858-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m03 "sudo cat /home/docker/cp-test_multinode-361858-m02_multinode-361858-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp testdata/cp-test.txt multinode-361858-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3698457069/001/cp-test_multinode-361858-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858-m03:/home/docker/cp-test.txt multinode-361858:/home/docker/cp-test_multinode-361858-m03_multinode-361858.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858 "sudo cat /home/docker/cp-test_multinode-361858-m03_multinode-361858.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 cp multinode-361858-m03:/home/docker/cp-test.txt multinode-361858-m02:/home/docker/cp-test_multinode-361858-m03_multinode-361858-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 ssh -n multinode-361858-m02 "sudo cat /home/docker/cp-test_multinode-361858-m03_multinode-361858-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.00s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-361858 node stop m03: (1.220705696s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361858 status: exit status 7 (533.753809ms)

-- stdout --
	multinode-361858
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-361858-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-361858-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr: exit status 7 (512.37107ms)

-- stdout --
	multinode-361858
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-361858-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-361858-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 12:25:53.365007 3021534 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:25:53.365144 3021534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:25:53.365155 3021534 out.go:358] Setting ErrFile to fd 2...
	I0923 12:25:53.365160 3021534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:25:53.365421 3021534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:25:53.365626 3021534 out.go:352] Setting JSON to false
	I0923 12:25:53.365671 3021534 mustload.go:65] Loading cluster: multinode-361858
	I0923 12:25:53.365755 3021534 notify.go:220] Checking for updates...
	I0923 12:25:53.366118 3021534 config.go:182] Loaded profile config "multinode-361858": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:25:53.366139 3021534 status.go:174] checking status of multinode-361858 ...
	I0923 12:25:53.366994 3021534 cli_runner.go:164] Run: docker container inspect multinode-361858 --format={{.State.Status}}
	I0923 12:25:53.385438 3021534 status.go:364] multinode-361858 host status = "Running" (err=<nil>)
	I0923 12:25:53.385466 3021534 host.go:66] Checking if "multinode-361858" exists ...
	I0923 12:25:53.385795 3021534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361858
	I0923 12:25:53.411897 3021534 host.go:66] Checking if "multinode-361858" exists ...
	I0923 12:25:53.412241 3021534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:25:53.412297 3021534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361858
	I0923 12:25:53.432747 3021534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41932 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/multinode-361858/id_rsa Username:docker}
	I0923 12:25:53.529034 3021534 ssh_runner.go:195] Run: systemctl --version
	I0923 12:25:53.533854 3021534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:25:53.545913 3021534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:25:53.603816 3021534 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-23 12:25:53.593536322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:25:53.604417 3021534 kubeconfig.go:125] found "multinode-361858" server: "https://192.168.67.2:8443"
	I0923 12:25:53.604452 3021534 api_server.go:166] Checking apiserver status ...
	I0923 12:25:53.604504 3021534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:25:53.617297 3021534 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	I0923 12:25:53.626888 3021534 api_server.go:182] apiserver freezer: "4:freezer:/docker/33240b9c5046bb62cb7996eabe6d309eea75cb54a08b1dcdd6782d96819c94da/kubepods/burstable/pod6575dc773c69f2b8e036830fef3141c4/a9bba67a024c57c0a80396a3d7e04f0ff51d39890b0ee95e7ee761991725c752"
	I0923 12:25:53.626963 3021534 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/33240b9c5046bb62cb7996eabe6d309eea75cb54a08b1dcdd6782d96819c94da/kubepods/burstable/pod6575dc773c69f2b8e036830fef3141c4/a9bba67a024c57c0a80396a3d7e04f0ff51d39890b0ee95e7ee761991725c752/freezer.state
	I0923 12:25:53.636140 3021534 api_server.go:204] freezer state: "THAWED"
	I0923 12:25:53.636173 3021534 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0923 12:25:53.644746 3021534 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0923 12:25:53.644786 3021534 status.go:456] multinode-361858 apiserver status = Running (err=<nil>)
	I0923 12:25:53.644797 3021534 status.go:176] multinode-361858 status: &{Name:multinode-361858 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:25:53.644821 3021534 status.go:174] checking status of multinode-361858-m02 ...
	I0923 12:25:53.645148 3021534 cli_runner.go:164] Run: docker container inspect multinode-361858-m02 --format={{.State.Status}}
	I0923 12:25:53.661340 3021534 status.go:364] multinode-361858-m02 host status = "Running" (err=<nil>)
	I0923 12:25:53.661375 3021534 host.go:66] Checking if "multinode-361858-m02" exists ...
	I0923 12:25:53.661676 3021534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-361858-m02
	I0923 12:25:53.678659 3021534 host.go:66] Checking if "multinode-361858-m02" exists ...
	I0923 12:25:53.679019 3021534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:25:53.679067 3021534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-361858-m02
	I0923 12:25:53.696751 3021534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:41937 SSHKeyPath:/home/jenkins/minikube-integration/19688-2897765/.minikube/machines/multinode-361858-m02/id_rsa Username:docker}
	I0923 12:25:53.788448 3021534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:25:53.800512 3021534 status.go:176] multinode-361858-m02 status: &{Name:multinode-361858-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:25:53.800547 3021534 status.go:174] checking status of multinode-361858-m03 ...
	I0923 12:25:53.800855 3021534 cli_runner.go:164] Run: docker container inspect multinode-361858-m03 --format={{.State.Status}}
	I0923 12:25:53.818953 3021534 status.go:364] multinode-361858-m03 host status = "Stopped" (err=<nil>)
	I0923 12:25:53.818986 3021534 status.go:377] host is not running, skipping remaining checks
	I0923 12:25:53.819013 3021534 status.go:176] multinode-361858-m03 status: &{Name:multinode-361858-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
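In the stderr trace above, the status probe finds the kube-apiserver process, resolves its freezer cgroup from /proc/<pid>/cgroup, confirms the state is THAWED, and only then queries /healthz. A sketch of that cgroup-v1 freezer lookup (pid 1433 is the one from this trace and purely illustrative):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// freezerPath extracts the freezer cgroup from /proc/<pid>/cgroup, mirroring
// the `egrep ^[0-9]+:freezer:` step in the trace. cgroup v1 layout only.
func freezerPath(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	m := regexp.MustCompile(`(?m)^[0-9]+:freezer:(.*)$`).FindStringSubmatch(string(data))
	if m == nil {
		return "", fmt.Errorf("no freezer controller for pid %d", pid)
	}
	return m[1], nil
}

func main() {
	cg, err := freezerPath(1433) // illustrative pid, taken from the trace above
	if err != nil {
		fmt.Println(err)
		return
	}
	// The probe then reads .../freezer.state and expects "THAWED" before it
	// bothers hitting the apiserver's /healthz endpoint.
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + cg + "/freezer.state")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(strings.TrimSpace(string(state)))
}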
TestMultiNode/serial/StartAfterStop (10.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-361858 node start m03 -v=7 --alsologtostderr: (9.451094836s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.22s)

TestMultiNode/serial/RestartKeepsNodes (99.4s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-361858
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-361858
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-361858: (24.971914442s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361858 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361858 --wait=true -v=8 --alsologtostderr: (1m14.311672729s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-361858
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.40s)

TestMultiNode/serial/DeleteNode (5.53s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-361858 node delete m03: (4.856630553s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)

TestMultiNode/serial/StopMultiNode (23.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 stop
E0923 12:27:52.304641 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-361858 stop: (23.789091859s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361858 status: exit status 7 (98.174509ms)

-- stdout --
	multinode-361858
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-361858-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr: exit status 7 (101.909072ms)

-- stdout --
	multinode-361858
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-361858-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 12:28:12.912893 3029934 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:28:12.913024 3029934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:12.913035 3029934 out.go:358] Setting ErrFile to fd 2...
	I0923 12:28:12.913040 3029934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:28:12.913290 3029934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:28:12.913497 3029934 out.go:352] Setting JSON to false
	I0923 12:28:12.913531 3029934 mustload.go:65] Loading cluster: multinode-361858
	I0923 12:28:12.913633 3029934 notify.go:220] Checking for updates...
	I0923 12:28:12.913966 3029934 config.go:182] Loaded profile config "multinode-361858": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:28:12.913981 3029934 status.go:174] checking status of multinode-361858 ...
	I0923 12:28:12.914582 3029934 cli_runner.go:164] Run: docker container inspect multinode-361858 --format={{.State.Status}}
	I0923 12:28:12.933491 3029934 status.go:364] multinode-361858 host status = "Stopped" (err=<nil>)
	I0923 12:28:12.933515 3029934 status.go:377] host is not running, skipping remaining checks
	I0923 12:28:12.933524 3029934 status.go:176] multinode-361858 status: &{Name:multinode-361858 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:28:12.933557 3029934 status.go:174] checking status of multinode-361858-m02 ...
	I0923 12:28:12.933916 3029934 cli_runner.go:164] Run: docker container inspect multinode-361858-m02 --format={{.State.Status}}
	I0923 12:28:12.963543 3029934 status.go:364] multinode-361858-m02 host status = "Stopped" (err=<nil>)
	I0923 12:28:12.963567 3029934 status.go:377] host is not running, skipping remaining checks
	I0923 12:28:12.963574 3029934 status.go:176] multinode-361858-m02 status: &{Name:multinode-361858-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

TestMultiNode/serial/RestartMultiNode (52.56s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361858 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0923 12:28:46.478515 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361858 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.897168273s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-361858 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.56s)

TestMultiNode/serial/ValidateNameConflict (31.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-361858
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361858-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-361858-m02 --driver=docker  --container-runtime=containerd: exit status 14 (88.526492ms)

-- stdout --
	* [multinode-361858-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-361858-m02' is duplicated with machine name 'multinode-361858-m02' in profile 'multinode-361858'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-361858-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-361858-m03 --driver=docker  --container-runtime=containerd: (28.701342927s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-361858
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-361858: exit status 80 (368.961942ms)

-- stdout --
	* Adding node m03 to cluster multinode-361858 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-361858-m03 already exists in multinode-361858-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-361858-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-361858-m03: (1.955606446s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.17s)

TestPreload (117.74s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-586437 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0923 12:30:09.550373 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-586437 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.36966711s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-586437 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-586437 image pull gcr.io/k8s-minikube/busybox: (1.97836491s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-586437
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-586437: (5.739758355s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-586437 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-586437 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (32.047743242s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-586437 image list
helpers_test.go:175: Cleaning up "test-preload-586437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-586437
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-586437: (2.350298934s)
--- PASS: TestPreload (117.74s)

TestScheduledStopUnix (107.23s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-566885 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-566885 --memory=2048 --driver=docker  --container-runtime=containerd: (30.872073721s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566885 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-566885 -n scheduled-stop-566885
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566885 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 12:32:09.828106 2903158 retry.go:31] will retry after 149.435µs: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.829336 2903158 retry.go:31] will retry after 220.783µs: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.830469 2903158 retry.go:31] will retry after 242.516µs: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.831586 2903158 retry.go:31] will retry after 372.08µs: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.832667 2903158 retry.go:31] will retry after 531.564µs: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.833785 2903158 retry.go:31] will retry after 983.33µs: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.834911 2903158 retry.go:31] will retry after 640.21µs: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.836029 2903158 retry.go:31] will retry after 1.636069ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.838196 2903158 retry.go:31] will retry after 3.341621ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.842238 2903158 retry.go:31] will retry after 5.386223ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.848606 2903158 retry.go:31] will retry after 5.743257ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.855238 2903158 retry.go:31] will retry after 5.371596ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.861462 2903158 retry.go:31] will retry after 14.646038ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.876647 2903158 retry.go:31] will retry after 28.556364ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
I0923 12:32:09.905836 2903158 retry.go:31] will retry after 38.946318ms: open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/scheduled-stop-566885/pid: no such file or directory
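The retry lines above appear to come from a generic backoff helper re-reading the scheduled-stop pid file until it exists. A minimal Go sketch of that polling pattern, assuming an illustrative path and a simple doubling delay (the real helper evidently adds jitter, hence the uneven intervals logged):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls path with growing delays until the file can be read
// or the deadline passes. Names and the path are illustrative, not minikube's.
func waitForPidFile(path string, timeout time.Duration) ([]byte, error) {
	wait := 100 * time.Microsecond
	deadline := time.Now().Add(timeout)
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2 // roughly geometric growth, as in the log above
	}
}

func main() {
	if _, err := waitForPidFile("/tmp/scheduled-stop/pid", time.Second); err != nil {
		fmt.Println(err)
	}
}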
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566885 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-566885 -n scheduled-stop-566885
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-566885
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-566885 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0923 12:32:52.304641 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-566885
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-566885: exit status 7 (69.716011ms)

                                                
                                                
-- stdout --
	scheduled-stop-566885
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-566885 -n scheduled-stop-566885
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-566885 -n scheduled-stop-566885: exit status 7 (67.632321ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-566885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-566885
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-566885: (4.803830934s)
--- PASS: TestScheduledStopUnix (107.23s)

                                                
                                    
TestInsufficientStorage (10.75s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-858883 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-858883 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.257586596s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8e4b8440-d46a-4538-9d73-032b73c36397","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-858883] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d166623-7711-44c9-b547-2cced9601cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19688"}}
	{"specversion":"1.0","id":"d05f2c92-5958-49f2-8fe2-89f29a258c23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ede3883-d174-4860-a532-dc9cd056463a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig"}}
	{"specversion":"1.0","id":"71fbca78-0f16-4c68-8aac-ef64424ed1a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube"}}
	{"specversion":"1.0","id":"98348bb2-7086-4ef2-8baa-95d8ec164a52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7160f608-0108-49b7-9f24-c3fcecc11286","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6ad59519-bc11-4494-8a80-309e92306f65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"13272d51-d91b-4405-a4ba-cf689994c1bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4bb42410-f1ba-402e-8569-3f4103f9db2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3957bb54-dea5-4da4-8171-c0dd7c9beb62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"85b2365f-7818-4afd-98c5-2db4ccdf9aae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-858883\" primary control-plane node in \"insufficient-storage-858883\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c4c81d5-f3d2-4d63-b27f-2573846b9248","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a1b6950-c3ec-4822-a771-49dbd0de9b9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1cb1a385-66c8-445f-8696-ccdf180204e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
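Each stdout line above is a CloudEvents-style JSON object produced by --output=json. A minimal sketch of consuming that stream, assuming only the fields visible in the log; the struct here is illustrative, not minikube's own type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the parts of the CloudEvents payload used below.
type event struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		Exitcode string `json:"exitcode"`
		Advice   string `json:"advice"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// Error events carry the exit code and remediation advice seen above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error (exit %s): %s\n", ev.Data.Exitcode, ev.Data.Message)
		}
	}
}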
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-858883 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-858883 --output=json --layout=cluster: exit status 7 (299.737973ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-858883","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-858883","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 12:33:34.218539 3048474 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-858883" does not appear in /home/jenkins/minikube-integration/19688-2897765/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-858883 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-858883 --output=json --layout=cluster: exit status 7 (293.365112ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-858883","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-858883","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 12:33:34.513177 3048537 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-858883" does not appear in /home/jenkins/minikube-integration/19688-2897765/kubeconfig
	E0923 12:33:34.523622 3048537 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/insufficient-storage-858883/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-858883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-858883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-858883: (1.894286918s)
--- PASS: TestInsufficientStorage (10.75s)
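For reference, the --layout=cluster payloads above use HTTP-flavoured status codes (507 InsufficientStorage, 405 Stopped, 500 Error). A minimal decoding sketch, assuming only the JSON shape shown in the log; the type names are hypothetical:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed from the status output logged above.
	raw := []byte(`{"Name":"insufficient-storage-858883","StatusCode":507,"StatusName":"InsufficientStorage","Nodes":[{"Name":"insufficient-storage-858883","StatusCode":507,"Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s: %d %s\n", c.Name, c.StatusCode, c.StatusName)
		}
	}
}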

                                                
                                    
TestRunningBinaryUpgrade (82.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4052151879 start -p running-upgrade-995205 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4052151879 start -p running-upgrade-995205 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.373511387s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-995205 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-995205 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.859052994s)
helpers_test.go:175: Cleaning up "running-upgrade-995205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-995205
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-995205: (3.095596863s)
--- PASS: TestRunningBinaryUpgrade (82.95s)
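The upgrade flow above boils down to running two binaries against one profile: the released v1.26.0 binary creates the cluster, then the binary under test restarts it in place. A hedged sketch of that pattern with os/exec; the old-binary path is illustrative (the real one carries a random suffix):

package main

import (
	"log"
	"os/exec"
)

// run executes a binary and aborts on failure, echoing its combined output.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	profile := "running-upgrade-995205"
	// Create the cluster with the old release...
	run("/tmp/minikube-v1.26.0", "start", "-p", profile, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=containerd")
	// ...then restart the same profile with the binary under test.
	run("out/minikube-linux-arm64", "start", "-p", profile, "--memory=2200",
		"--driver=docker", "--container-runtime=containerd")
	run("out/minikube-linux-arm64", "delete", "-p", profile)
}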

                                                
                                    
TestKubernetesUpgrade (104.19s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.241793352s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-136777
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-136777: (1.246711991s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-136777 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-136777 status --format={{.Host}}: exit status 7 (81.179139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.694327694s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-136777 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (129.542128ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-136777] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-136777
	    minikube start -p kubernetes-upgrade-136777 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1367772 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-136777 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-136777 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.258589586s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-136777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-136777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-136777: (2.387161941s)
--- PASS: TestKubernetesUpgrade (104.19s)
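The downgrade attempt above is expected to fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of asserting that from Go, reading the code out of the *exec.ExitError:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "kubernetes-upgrade-136777", "--memory=2200",
		"--kubernetes-version=v1.20.0", "--driver=docker",
		"--container-runtime=containerd")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected success: downgrade should be rejected")
	case errors.As(err, &ee) && ee.ExitCode() == 106:
		fmt.Println("got expected K8S_DOWNGRADE_UNSUPPORTED (exit 106)")
	default:
		fmt.Println("unexpected error:", err)
	}
}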

                                                
                                    
TestMissingContainerUpgrade (186.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.989417129 start -p missing-upgrade-714403 --memory=2200 --driver=docker  --container-runtime=containerd
E0923 12:33:46.478293 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.989417129 start -p missing-upgrade-714403 --memory=2200 --driver=docker  --container-runtime=containerd: (1m40.200727752s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-714403
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-714403: (10.319422512s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-714403
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-714403 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-714403 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.125703862s)
helpers_test.go:175: Cleaning up "missing-upgrade-714403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-714403
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-714403: (2.647954843s)
--- PASS: TestMissingContainerUpgrade (186.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910308 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-910308 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (76.694948ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-910308] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
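Exit status 14 (MK_USAGE) above is pure flag validation: --kubernetes-version and --no-kubernetes are mutually exclusive, so start exits before touching the driver. A hypothetical sketch of such a check; the helper is illustrative, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
)

// validateStartFlags rejects the conflicting combination seen in the log.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	if err := validateStartFlags(true, "1.20"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // surfaced as exit status 14
	}
}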

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910308 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910308 --driver=docker  --container-runtime=containerd: (39.412711509s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-910308 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.93s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (20.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910308 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910308 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.045154788s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-910308 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-910308 status -o json: exit status 2 (383.829313ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-910308","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-910308
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-910308: (3.949331292s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.38s)
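The `status -o json` line above decodes into a flat object; exit status 2 here just means the host is running while the Kubernetes components are stopped. A minimal sketch, using the JSON verbatim from the log:

package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"NoKubernetes-910308","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var st profileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}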

                                                
                                    
TestNoKubernetes/serial/Start (9.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910308 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910308 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.337173362s)
--- PASS: TestNoKubernetes/serial/Start (9.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-910308 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-910308 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.027641ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
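The kubelet check above leans on systemd semantics: `systemctl is-active --quiet` exits 0 only for an active unit (3 is typical for an inactive one, matching the ssh status above), so a failing command is the pass condition. A small sketch of that inverted assertion:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the ssh invocation from the log; a non-zero exit is expected.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-910308",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}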

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-910308
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-910308: (1.191537738s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-910308 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-910308 --driver=docker  --container-runtime=containerd: (7.071680995s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.07s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-910308 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-910308 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.456116ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (142.00s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1079115557 start -p stopped-upgrade-807189 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1079115557 start -p stopped-upgrade-807189 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (56.253835109s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1079115557 -p stopped-upgrade-807189 stop
E0923 12:37:52.307440 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1079115557 -p stopped-upgrade-807189 stop: (20.262300813s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-807189 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-807189 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.482586484s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.00s)

                                                
                                    
TestPause/serial/Start (69.76s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-600856 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0923 12:38:46.479593 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-600856 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m9.763175843s)
--- PASS: TestPause/serial/Start (69.76s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-807189
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-807189: (1.164774428s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.03s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-600856 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-600856 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.011163166s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.03s)

                                                
                                    
TestPause/serial/Pause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-600856 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-600856 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-600856 --output=json --layout=cluster: exit status 2 (380.719214ms)

                                                
                                                
-- stdout --
	{"Name":"pause-600856","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-600856","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
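The cluster layout above reuses HTTP-ish codes, with 418 marking a paused component; the mixed 418/405 node still makes the whole status command exit 2. A tiny lookup sketch covering the codes that appear in this report:

package main

import "fmt"

// statusNames maps the status codes observed in this report's JSON output.
var statusNames = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	for _, code := range []int{418, 405, 200} {
		fmt.Printf("%d => %s\n", code, statusNames[code])
	}
}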

                                                
                                    
TestPause/serial/Unpause (0.92s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-600856 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

                                                
                                    
TestPause/serial/PauseAgain (1.03s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-600856 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-600856 --alsologtostderr -v=5: (1.027180767s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

                                                
                                    
TestPause/serial/DeletePaused (2.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-600856 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-600856 --alsologtostderr -v=5: (2.849295528s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-600856
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-600856: exit status 1 (19.653012ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-600856: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
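Deletion is verified negatively above: once the profile is gone, `docker volume inspect` must fail with "no such volume". A minimal sketch of that check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero exit from inspect means the volume no longer exists.
	out, err := exec.Command("docker", "volume", "inspect", "pause-600856").CombinedOutput()
	if err != nil {
		fmt.Printf("volume gone, as expected: %s", out)
		return
	}
	fmt.Println("unexpected: volume still exists")
}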

                                                
                                    
TestNetworkPlugins/group/false (5.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-492817 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-492817 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (272.538419ms)

                                                
                                                
-- stdout --
	* [false-492817] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19688
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:39:34.498248 3084982 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:39:34.498457 3084982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:39:34.498485 3084982 out.go:358] Setting ErrFile to fd 2...
	I0923 12:39:34.498504 3084982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:39:34.498823 3084982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19688-2897765/.minikube/bin
	I0923 12:39:34.499311 3084982 out.go:352] Setting JSON to false
	I0923 12:39:34.500409 3084982 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":159722,"bootTime":1726935453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0923 12:39:34.500514 3084982 start.go:139] virtualization:  
	I0923 12:39:34.504665 3084982 out.go:177] * [false-492817] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 12:39:34.507072 3084982 out.go:177]   - MINIKUBE_LOCATION=19688
	I0923 12:39:34.507133 3084982 notify.go:220] Checking for updates...
	I0923 12:39:34.512568 3084982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:39:34.514586 3084982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19688-2897765/kubeconfig
	I0923 12:39:34.516848 3084982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19688-2897765/.minikube
	I0923 12:39:34.518935 3084982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 12:39:34.521722 3084982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:39:34.524738 3084982 config.go:182] Loaded profile config "force-systemd-flag-410328": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0923 12:39:34.524843 3084982 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:39:34.582579 3084982 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 12:39:34.582726 3084982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:39:34.672972 3084982 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 12:39:34.657714519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 12:39:34.673098 3084982 docker.go:318] overlay module found
	I0923 12:39:34.676040 3084982 out.go:177] * Using the docker driver based on user configuration
	I0923 12:39:34.678183 3084982 start.go:297] selected driver: docker
	I0923 12:39:34.678198 3084982 start.go:901] validating driver "docker" against <nil>
	I0923 12:39:34.678221 3084982 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:39:34.681237 3084982 out.go:201] 
	W0923 12:39:34.683498 3084982 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0923 12:39:34.685580 3084982 out.go:201] 

                                                
                                                
** /stderr **
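The MK_USAGE failure above is intentional: with the containerd runtime, pod networking needs a CNI plugin, so --cni=false is rejected before any cluster work starts (hence the "[pass: true]" in the debug logs below). A hypothetical sketch of such a guard; the rule as written here is an assumption for illustration, not minikube's exact logic:

package main

import "fmt"

// validateCNI rejects --cni=false for runtimes that cannot run without CNI.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // surfaced as exit status 14
	}
}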
net_test.go:88: 
----------------------- debugLogs start: false-492817 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-492817

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-492817" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-492817" does not exist

>>> host: kubelet daemon status:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: kubelet daemon config:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> k8s: kubelet logs:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-492817

>>> host: docker daemon status:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: docker daemon config:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /etc/docker/daemon.json:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: docker system info:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: cri-docker daemon status:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: cri-docker daemon config:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: cri-dockerd version:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: containerd daemon status:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: containerd daemon config:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /etc/containerd/config.toml:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: containerd config dump:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: crio daemon status:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: crio daemon config:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: /etc/crio:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

>>> host: crio config:
* Profile "false-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-492817"

----------------------- debugLogs end: false-492817 [took: 5.05166304s] --------------------------------
helpers_test.go:175: Cleaning up "false-492817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-492817
--- PASS: TestNetworkPlugins/group/false (5.50s)
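Note on the debugLogs dump above: every probe reports the same `Profile "false-492817" not found` / missing-context error because this case evidently never brings a cluster up for that profile (the whole case, including cleanup, took about 5.5 seconds), so there is nothing for the collectors to inspect. A quick way to see which profiles actually exist at a given point in a run, using the same binary the tests call (and the command the error text itself recommends):

    out/minikube-linux-arm64 profile list

An absent profile is the expected state here, not a failure.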

TestStartStop/group/old-k8s-version/serial/FirstStart (153.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-605045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0923 12:42:52.304441 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-605045 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m33.883938245s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.88s)

TestStartStop/group/no-preload/serial/FirstStart (76.75s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-569169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-569169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m16.754423418s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.75s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-605045 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2993d14-acdd-49a0-8888-bd25b793ecf8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2993d14-acdd-49a0-8888-bd25b793ecf8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004154856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-605045 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.88s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-605045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-605045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.171544241s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-605045 describe deploy/metrics-server -n kube-system
E0923 12:43:46.478067 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/old-k8s-version/serial/Stop (14.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-605045 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-605045 --alsologtostderr -v=3: (14.101871523s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-605045 -n old-k8s-version-605045
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-605045 -n old-k8s-version-605045: exit status 7 (96.876497ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-605045 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
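The "exit status 7 (may be ok)" line above is expected for a stopped profile: minikube status encodes component state in its exit code rather than treating a down cluster as a hard error. Going by the bit encoding described in minikube status --help (host, cluster, and Kubernetes each set one bit), 7 means all three are down, which is exactly what a cleanly stopped cluster should report. A minimal check by hand, assuming the profile from this run still exists:

    out/minikube-linux-arm64 status -p old-k8s-version-605045 -n old-k8s-version-605045; echo "exit=$?"

The test therefore only asserts that the host reports Stopped before enabling the dashboard addon offline.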

TestStartStop/group/no-preload/serial/DeployApp (9.88s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-569169 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1bc9202-3c9b-42b6-8555-410c4fc6e55c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1bc9202-3c9b-42b6-8555-410c4fc6e55c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004452496s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-569169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.88s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-569169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-569169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.095125998s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-569169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (12.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-569169 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-569169 --alsologtostderr -v=3: (12.099678815s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-569169 -n no-preload-569169
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-569169 -n no-preload-569169: exit status 7 (71.087675ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-569169 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (289.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-569169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 12:46:49.552307 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:47:52.304708 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:48:46.477907 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-569169 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m48.818276052s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-569169 -n no-preload-569169
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.18s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vg9zc" [ec16de92-28c0-42de-b59f-81c35021763e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003546632s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vg9zc" [ec16de92-28c0-42de-b59f-81c35021763e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003333954s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-569169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-569169 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
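VerifyKubernetesImages works by listing everything in the node's container runtime with image list --format=json and flagging images outside the expected Kubernetes set; kindest/kindnetd and the busybox test image are logged as "non-minikube" but tolerated. The same listing can be inspected directly:

    out/minikube-linux-arm64 -p no-preload-569169 image list --format=json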

TestStartStop/group/no-preload/serial/Pause (3.88s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-569169 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-569169 -n no-preload-569169
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-569169 -n no-preload-569169: exit status 2 (384.2171ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-569169 -n no-preload-569169
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-569169 -n no-preload-569169: exit status 2 (462.886245ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-569169 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-569169 -n no-preload-569169
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-569169 -n no-preload-569169
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.88s)
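The Pause check is a fixed sequence: pause the profile, expect the apiserver to report Paused and the kubelet Stopped (each status query exiting 2, which the harness tolerates), then unpause and expect both queries to come back clean. Reproduced by hand against this profile, assuming it is still up:

    out/minikube-linux-arm64 pause -p no-preload-569169
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-569169    # expect "Paused", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-569169
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-569169    # expect a clean exit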

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kbkhr" [490bf36e-a542-4a2c-a50d-dd28aa388c7d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004432621s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/FirstStart (66.24s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-700526 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-700526 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m6.240052541s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.24s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kbkhr" [490bf36e-a542-4a2c-a50d-dd28aa388c7d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005883316s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-605045 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-605045 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (3.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-605045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-605045 --alsologtostderr -v=1: (1.030631643s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-605045 -n old-k8s-version-605045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-605045 -n old-k8s-version-605045: exit status 2 (418.46075ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-605045 -n old-k8s-version-605045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-605045 -n old-k8s-version-605045: exit status 2 (373.09757ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-605045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-605045 -n old-k8s-version-605045
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-605045 -n old-k8s-version-605045
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.69s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-009872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-009872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m37.427124958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.43s)
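default-k8s-diff-port is the standard flow with --apiserver-port=8444, i.e. the API server published on a non-default port (minikube defaults to 8443). Once the profile is up, the generated kubeconfig entry should point at the 8444 endpoint, which a plain kubectl call can confirm:

    kubectl --context default-k8s-diff-port-009872 cluster-info

The control-plane URL in the output is expected to end in :8444; this is a sanity check, not part of the test itself.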

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-700526 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f0bc15e-7cf0-4a7c-974e-cd670cab5dad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7f0bc15e-7cf0-4a7c-974e-cd670cab5dad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003970523s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-700526 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-700526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-700526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028725442s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-700526 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-700526 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-700526 --alsologtostderr -v=3: (12.142973093s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-700526 -n embed-certs-700526
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-700526 -n embed-certs-700526: exit status 7 (76.491054ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-700526 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (267.53s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-700526 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-700526 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.196724304s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-700526 -n embed-certs-700526
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.53s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-009872 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [005865cf-3034-4e17-867f-f832d9ccddfc] Pending
helpers_test.go:344: "busybox" [005865cf-3034-4e17-867f-f832d9ccddfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [005865cf-3034-4e17-867f-f832d9ccddfc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005195262s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-009872 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-009872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-009872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058571539s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-009872 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-009872 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-009872 --alsologtostderr -v=3: (12.142294767s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872: exit status 7 (73.903415ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-009872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-009872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0923 12:52:52.304601 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:34.887254 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:34.893835 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:34.905299 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:34.926876 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:34.968416 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:35.049866 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:35.211564 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:35.533440 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:36.174835 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:37.456781 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:40.019129 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:45.141924 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:46.477606 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:53:55.383361 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:15.865072 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.127354 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.133818 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.145300 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.166732 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.208211 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.289661 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.451121 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:51.772759 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:52.414070 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:53.696151 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:56.257735 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:54:56.826889 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:55:01.379724 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:55:11.621972 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:55:32.103625 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:56:13.065004 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:56:18.748551 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-009872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (5m3.918504136s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.34s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7qx4m" [ae710d16-309e-4e35-a3db-fce41f80f2eb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003181059s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7qx4m" [ae710d16-309e-4e35-a3db-fce41f80f2eb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004133453s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-700526 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-700526 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-700526 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-700526 -n embed-certs-700526
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-700526 -n embed-certs-700526: exit status 2 (314.445635ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-700526 -n embed-certs-700526
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-700526 -n embed-certs-700526: exit status 2 (322.272561ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-700526 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-700526 -n embed-certs-700526
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-700526 -n embed-certs-700526
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/newest-cni/serial/FirstStart (36.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-595266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-595266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (36.988393827s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.99s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-595266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-595266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069125912s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)
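The "cni mode requires additional setup" warning is the expected shape for newest-cni: the cluster starts with --network-plugin=cni and a pod CIDR handed to kubeadm, but no CNI is actually installed, so workload pods cannot schedule and the deploy/user-app steps are skipped rather than failed. If checking by hand, the visible symptom would be:

    kubectl --context newest-cni-595266 get nodes    # node stays NotReady until a CNI is applied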

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-595266 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-595266 --alsologtostderr -v=3: (1.254539934s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-595266 -n newest-cni-595266
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-595266 -n newest-cni-595266: exit status 7 (75.163748ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-595266 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
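As the exit status 7 above shows, minikube status intentionally exits non-zero while the host is stopped, and addons can still be enabled against a stopped profile. A hand-run sketch under the same assumptions (hypothetical profile name):

minikube status --format='{{.Host}}' -p cni-demo   # prints "Stopped"
echo $?                                            # 7 while the host is down
# addons can still be toggled on the stopped profile:
minikube addons enable dashboard -p cni-demo \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4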

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.09s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-595266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-595266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (16.702370354s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-595266 -n newest-cni-595266
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-595266 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.99s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-595266 --alsologtostderr -v=1
E0923 12:57:34.989196 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:35.379938 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-595266 -n newest-cni-595266
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-595266 -n newest-cni-595266: exit status 2 (332.227542ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-595266 -n newest-cni-595266
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-595266 -n newest-cni-595266: exit status 2 (317.200453ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-595266 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-595266 -n newest-cni-595266
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-595266 -n newest-cni-595266
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.99s)
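The Pause step is a fixed sequence: pause, probe component status (the exit status 2 results above are expected while paused), then unpause. A manual sketch with a hypothetical profile name:

minikube pause -p cni-demo --alsologtostderr -v=1
minikube status --format='{{.APIServer}}' -p cni-demo   # "Paused", exit status 2
minikube status --format='{{.Kubelet}}' -p cni-demo     # "Stopped", exit status 2
minikube unpause -p cni-demo --alsologtostderr -v=1
minikube status --format='{{.APIServer}}' -p cni-demo   # exit 0 again once unpaused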

                                                
                                    
TestNetworkPlugins/group/auto/Start (61.62s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m1.620212494s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w6zpv" [134d818f-dc81-4b03-b988-3455f55ec291] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004682474s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w6zpv" [134d818f-dc81-4b03-b988-3455f55ec291] Running
E0923 12:57:52.303953 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/addons-348379/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005706542s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-009872 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-009872 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)
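VerifyKubernetesImages dumps the profile's image list as JSON and flags anything that is not a stock Kubernetes image, such as the kindnetd and busybox images noted above. A manual approximation (the jq filter is illustrative only, assuming the JSON carries a repoTags field):

minikube -p default-k8s-diff-port-009872 image list --format=json \
  | jq -r '.[].repoTags[]'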

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-009872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-009872 --alsologtostderr -v=1: (1.061118785s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872: exit status 2 (422.797225ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872: exit status 2 (445.036122ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-009872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-009872 -n default-k8s-diff-port-009872
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.31s)
E0923 13:02:58.126542 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:29.553815 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:34.887355 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:39.088797 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:42.766190 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:42.772667 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:42.784074 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:42.805460 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:42.846966 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:42.928351 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:43.089936 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:43.411543 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:44.053879 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:45.336180 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:46.478284 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:03:47.898342 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (63.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0923 12:58:34.886529 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/old-k8s-version-605045/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m3.172324561s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.49s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-492817 "pgrep -a kubelet"
I0923 12:58:42.420533 2903158 config.go:182] Loaded profile config "auto-492817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.49s)
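The KubeletFlags check only inspects the kubelet command line on the node over SSH; the manual equivalent is a one-liner:

# Print the running kubelet's PID and full argument list;
# the test compares the flags against the profile's config.
minikube ssh -p auto-492817 "pgrep -a kubelet"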

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.4s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-492817 replace --force -f testdata/netcat-deployment.yaml
I0923 12:58:42.799044 2903158 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s8mgq" [3f317439-8a79-4eae-ab29-e1010f61bfd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 12:58:46.478595 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/functional-100969/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-s8mgq" [3f317439-8a79-4eae-ab29-e1010f61bfd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.0033647s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.40s)

TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-492817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
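The DNS probe resolves the API server's service name from inside the netcat pod; running it by hand is identical to the logged command:

# In-cluster DNS must resolve the kubernetes.default service name.
kubectl --context auto-492817 exec deployment/netcat -- \
  nslookup kubernetes.default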

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
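Localhost and HairPin differ only in the target: the first connects to the pod's own port over loopback, the second through the pod's service name, which the CNI must route back to the originating pod (hairpin traffic). Manual equivalents of the two probes:

# Loopback reachability from inside the pod.
kubectl --context auto-492817 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Hairpin: the netcat service resolves back to the sending pod.
kubectl --context auto-492817 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"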

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-b5zcv" [927a0754-51b5-4cd6-a61e-8b9a9769f38b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004076823s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-492817 "pgrep -a kubelet"
I0923 12:59:13.848892 2903158 config.go:182] Loaded profile config "kindnet-492817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-492817 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8w5ql" [a311ffdd-8a1b-4bca-867b-7a3f9368dd8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8w5ql" [a311ffdd-8a1b-4bca-867b-7a3f9368dd8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004661829s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)
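Each NetCatPod step force-replaces the same netcat deployment and then waits for the pod to report Ready. A manual sketch; the kubectl wait line stands in for the harness's own polling helper, it is not what the test itself runs:

kubectl --context kindnet-492817 replace --force -f testdata/netcat-deployment.yaml
kubectl --context kindnet-492817 wait pod -l app=netcat \
  --for=condition=Ready --timeout=15m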

                                                
                                    
TestNetworkPlugins/group/calico/Start (73.58s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m13.577601727s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.58s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-492817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (52.43s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0923 12:59:51.127085 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:00:18.830584 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/no-preload-569169/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.426826764s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.43s)
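As this run demonstrates, --cni accepts a path to a CNI manifest as well as a keyword. A sketch with a hypothetical profile name:

# Apply a custom flannel manifest at start time instead of a built-in CNI.
minikube start -p custom-cni-demo \
  --memory=3072 --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml \
  --driver=docker --container-runtime=containerd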

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-885dt" [8e28e6be-e835-4130-9c75-24bd6ddb9e00] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004577368s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-492817 "pgrep -a kubelet"
I0923 13:00:34.408779 2903158 config.go:182] Loaded profile config "calico-492817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-492817 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mfbgh" [bfb8ef92-2bd7-434f-a2cc-bda3783c3bae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mfbgh" [bfb8ef92-2bd7-434f-a2cc-bda3783c3bae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004658973s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-492817 "pgrep -a kubelet"
I0923 13:00:42.385164 2903158 config.go:182] Loaded profile config "custom-flannel-492817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-492817 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fcn4j" [1d760dd0-d769-43fc-823d-fd90d8e6b292] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fcn4j" [1d760dd0-d769-43fc-823d-fd90d8e6b292] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004996422s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/calico/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-492817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-492817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (49.64s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (49.63951338s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.64s)
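--enable-default-cni=true is the legacy spelling of minikube's built-in bridge CNI (newer releases prefer --cni=bridge, so treat the equivalence as an assumption). A sketch with a hypothetical profile name:

minikube start -p bridge-cni-demo \
  --memory=3072 --wait=true --wait-timeout=15m \
  --enable-default-cni=true \
  --driver=docker --container-runtime=containerd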

                                                
                                    
TestNetworkPlugins/group/flannel/Start (54.1s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.104400299s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.10s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-492817 "pgrep -a kubelet"
I0923 13:01:59.399236 2903158 config.go:182] Loaded profile config "enable-default-cni-492817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-492817 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vttps" [40c69342-a58f-4f81-b761-cf5e51ffdb58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vttps" [40c69342-a58f-4f81-b761-cf5e51ffdb58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004450839s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-492817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9dwzr" [47c271ba-a94b-4036-9c2d-e10eb2bbcebf] Running
E0923 13:02:17.146464 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:17.152857 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:17.164668 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:17.186314 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:17.227671 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:17.309874 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:17.471584 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:17.792832 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:02:18.435074 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004317571s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-492817 "pgrep -a kubelet"
I0923 13:02:19.425936 2903158 config.go:182] Loaded profile config "flannel-492817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-492817 replace --force -f testdata/netcat-deployment.yaml
E0923 13:02:19.717048 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9nkhz" [71fe8e50-69fe-4e2c-b497-74174b182545] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 13:02:22.280351 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9nkhz" [71fe8e50-69fe-4e2c-b497-74174b182545] Running
E0923 13:02:27.402441 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/default-k8s-diff-port-009872/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003866591s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

TestNetworkPlugins/group/flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-492817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/bridge/Start (81.15s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-492817 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m21.149655989s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.15s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-492817 "pgrep -a kubelet"
I0923 13:03:52.594389 2903158 config.go:182] Loaded profile config "bridge-492817": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-492817 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7nb8s" [69a3a2c9-77fd-4ad2-9556-c78e83111652] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 13:03:53.020587 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7nb8s" [69a3a2c9-77fd-4ad2-9556-c78e83111652] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003756768s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-492817 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-492817 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0923 13:04:03.261917 2903158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19688-2897765/.minikube/profiles/auto-492817/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-021793 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-021793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-021793
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
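The precondition named here (none driver plus a non-empty SUDO_USER) can be checked the same way; a sketch under the assumption of a hypothetical driverName() helper standing in for the suite's --driver flag:

package integration

import (
	"os"
	"testing"
)

// driverName is a hypothetical stand-in for the suite's --driver flag;
// this run used the docker driver.
func driverName() string { return "docker" }

// maybeSkipChangeNoneUser sketches the guard: the test needs the none
// driver and SUDO_USER set, so it is skipped in this configuration.
func maybeSkipChangeNoneUser(t *testing.T) {
	if driverName() != "none" || os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}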

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-225527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-225527
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
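Even a skipped group cleans up its profile, which is where the 0.19s above comes from. A sketch of that cleanup pattern; the binary path matches the logs, but the helper itself is an assumption, not helpers_test.go's actual code:

package integration

import (
	"os/exec"
	"testing"
)

// cleanupProfile deletes the named profile even when the test skipped,
// so later tests start from a clean slate.
func cleanupProfile(t *testing.T, profile string) {
	t.Logf("Cleaning up %q profile ...", profile)
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
	}
}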

x
+
TestNetworkPlugins/group/kubenet (4.78s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-492817 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-492817

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-492817

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /etc/hosts:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /etc/resolv.conf:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-492817

>>> host: crictl pods:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: crictl containers:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> k8s: describe netcat deployment:
error: context "kubenet-492817" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-492817" does not exist

>>> k8s: netcat logs:
error: context "kubenet-492817" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-492817" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-492817" does not exist

>>> k8s: coredns logs:
error: context "kubenet-492817" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-492817" does not exist

>>> k8s: api server logs:
error: context "kubenet-492817" does not exist

>>> host: /etc/cni:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: ip a s:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: ip r s:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: iptables-save:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: iptables table nat:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-492817" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-492817" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-492817" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: kubelet daemon config:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> k8s: kubelet logs:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-492817

>>> host: docker daemon status:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: docker daemon config:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: docker system info:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: cri-docker daemon status:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: cri-docker daemon config:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: cri-dockerd version:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: containerd daemon status:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: containerd daemon config:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: containerd config dump:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: crio daemon status:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: crio daemon config:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: /etc/crio:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

>>> host: crio config:
* Profile "kubenet-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-492817"

----------------------- debugLogs end: kubenet-492817 [took: 4.513759999s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-492817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-492817
--- SKIP: TestNetworkPlugins/group/kubenet (4.78s)
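The kubenet variant is skipped because kubenet is not a CNI plugin, and any runtime other than docker (containerd in this run) needs one. A sketch of that check, again assuming a hypothetical containerRuntime() helper:

package integration

import (
	"strings"
	"testing"
)

// containerRuntime is a hypothetical stand-in for the --container-runtime
// flag; this run was testing containerd.
func containerRuntime() string { return "containerd" }

// maybeSkipKubenet sketches the CNI guard: kubenet provides no CNI, so
// non-docker runtimes cannot run the kubenet network-plugin variant.
func maybeSkipKubenet(t *testing.T) {
	if !strings.Contains(containerRuntime(), "docker") {
		t.Skip("Skipping the test as the containerd container runtime requires CNI")
	}
}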

x
+
TestNetworkPlugins/group/cilium (5.73s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-492817 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-492817

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-492817

>>> host: /etc/nsswitch.conf:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /etc/hosts:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /etc/resolv.conf:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-492817

>>> host: crictl pods:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: crictl containers:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> k8s: describe netcat deployment:
error: context "cilium-492817" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-492817" does not exist

>>> k8s: netcat logs:
error: context "cilium-492817" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-492817" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-492817" does not exist

>>> k8s: coredns logs:
error: context "cilium-492817" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-492817" does not exist

>>> k8s: api server logs:
error: context "cilium-492817" does not exist

>>> host: /etc/cni:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: ip a s:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: ip r s:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: iptables-save:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: iptables table nat:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-492817

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-492817

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-492817" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-492817" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-492817

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-492817

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-492817" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-492817" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-492817" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-492817" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-492817" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: kubelet daemon config:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> k8s: kubelet logs:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-492817

>>> host: docker daemon status:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: docker daemon config:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: docker system info:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: cri-docker daemon status:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: cri-docker daemon config:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: cri-dockerd version:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: containerd daemon status:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: containerd daemon config:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: containerd config dump:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: crio daemon status:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: crio daemon config:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: /etc/crio:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

>>> host: crio config:
* Profile "cilium-492817" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-492817"

----------------------- debugLogs end: cilium-492817 [took: 5.473222148s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-492817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-492817
--- SKIP: TestNetworkPlugins/group/cilium (5.73s)
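Every probe in the two debugLogs dumps above fails the same way because the kubenet-492817 and cilium-492817 profiles were never started: the "k8s:" entries are kubectl calls pinned to a context that does not exist, and the "host:" entries are minikube commands against a missing profile. A sketch of how such a probe runs, with the exact command list being an assumption rather than the suite's actual probe set:

package integration

import (
	"context"
	"os/exec"
	"testing"
	"time"
)

// debugProbe sketches the shape of a debugLogs entry: run a command pinned
// to the profile's kubeconfig context and log whatever comes back. Against
// a profile that was never started, kubectl reports the "context was not
// found" errors seen above.
func debugProbe(t *testing.T, profile string) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	probes := [][]string{
		{"kubectl", "--context", profile, "get", "nodes,svc,ep,ds,deploy,pods", "-A"},
		{"kubectl", "--context", profile, "config", "view"},
		{"minikube", "-p", profile, "ssh", "cat /etc/resolv.conf"},
	}
	for _, args := range probes {
		out, err := exec.CommandContext(ctx, args[0], args[1:]...).CombinedOutput()
		t.Logf(">>> %v:\n%s(err: %v)", args, out, err)
	}
}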