Test Report: Docker_Linux_containerd_arm64 19546

                    
9c905d7ddc6fcb24a41b70e16c9a4a5dd3740602:2024-10-04:36493

Failed tests (2/329)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                              | 211.27       |
| 303   | TestStartStop/group/old-k8s-version/serial/SecondStart | 372.72       |
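Both failures can be rerun locally from a minikube checkout using Go's subtest filter. A minimal sketch, assuming a built out/minikube-linux-arm64 binary as in this run; the exact flags the CI job passes to the integration suite are not shown in this report and may differ:

# Hypothetical local rerun of only the two failed tests.
go test ./test/integration -run 'TestAddons/serial/Volcano' -timeout 30m -v
go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -timeout 30m -v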
TestAddons/serial/Volcano (211.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:814: volcano-scheduler stabilized in 54.516843ms
addons_test.go:822: volcano-admission stabilized in 55.346369ms
addons_test.go:830: volcano-controller stabilized in 55.632406ms
addons_test.go:836: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-djjtz" [f231bba5-086d-4fcd-8605-e67dba103be4] Running
addons_test.go:836: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003541307s
addons_test.go:840: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-6h88w" [67c0b434-b1ff-46b7-8ade-7157d3db403a] Running
addons_test.go:840: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003956587s
addons_test.go:844: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-4smn5" [c55ed3ba-e8c3-4c90-863f-6ea71c4e2113] Running
addons_test.go:844: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003705564s
addons_test.go:849: (dbg) Run:  kubectl --context addons-813566 delete -n volcano-system job volcano-admission-init
addons_test.go:855: (dbg) Run:  kubectl --context addons-813566 create -f testdata/vcjob.yaml
addons_test.go:863: (dbg) Run:  kubectl --context addons-813566 get vcjob -n my-volcano
addons_test.go:881: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [914121bb-67c3-46c4-8b87-f57194853801] Pending
helpers_test.go:344: "test-job-nginx-0" [914121bb-67c3-46c4-8b87-f57194853801] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:881: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:881: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-813566 -n addons-813566
addons_test.go:881: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-04 02:54:55.657615459 +0000 UTC m=+427.524988726
addons_test.go:881: (dbg) Run:  kubectl --context addons-813566 describe po test-job-nginx-0 -n my-volcano
addons_test.go:881: (dbg) kubectl --context addons-813566 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-384bca0b-ce34-41ed-acf2-b7d961e0909e
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-46wcl (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-46wcl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:881: (dbg) Run:  kubectl --context addons-813566 logs test-job-nginx-0 -n my-volcano
addons_test.go:881: (dbg) kubectl --context addons-813566 logs test-job-nginx-0 -n my-volcano:
addons_test.go:882: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
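For orientation, the pod described above implies a Volcano Job along the following lines. This is a hedged reconstruction from the describe output, not the actual testdata/vcjob.yaml the test applies; the queue, task, and resource values are taken from the pod's labels and container spec:

# Sketch: recreate a job equivalent to the unschedulable pod (assumed shape,
# batch.volcano.sh/v1alpha1 API; context name taken from this run).
kubectl --context addons-813566 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  queue: test
  minAvailable: 1
  tasks:
  - replicas: 1
    name: nginx
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: nginx
          image: nginx:latest
          command: ["sleep", "10m"]
          resources:
            requests:
              cpu: "1"
            limits:
              cpu: "1"
EOF

The cpu: "1" request is what volcano could not place: on this single-node cluster the scheduler reported 0/1 nodes available with Insufficient cpu.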
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-813566
helpers_test.go:235: (dbg) docker inspect addons-813566:

-- stdout --
	[
	    {
	        "Id": "444c6b599521ba25c9d715a290a097030d6883548c354ccf30cda00446ac87fb",
	        "Created": "2024-10-04T02:48:27.122401309Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1156055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-04T02:48:27.25403229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/444c6b599521ba25c9d715a290a097030d6883548c354ccf30cda00446ac87fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/444c6b599521ba25c9d715a290a097030d6883548c354ccf30cda00446ac87fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/444c6b599521ba25c9d715a290a097030d6883548c354ccf30cda00446ac87fb/hosts",
	        "LogPath": "/var/lib/docker/containers/444c6b599521ba25c9d715a290a097030d6883548c354ccf30cda00446ac87fb/444c6b599521ba25c9d715a290a097030d6883548c354ccf30cda00446ac87fb-json.log",
	        "Name": "/addons-813566",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-813566:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-813566",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f490c9b6ad43f1f8fa4c8978570115450dd0d10278378474c41dfd0578f42852-init/diff:/var/lib/docker/overlay2/3fd4f374838913cfff21eeb0320112c1c5932de8178b660a56df0e13b7402d74/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f490c9b6ad43f1f8fa4c8978570115450dd0d10278378474c41dfd0578f42852/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f490c9b6ad43f1f8fa4c8978570115450dd0d10278378474c41dfd0578f42852/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f490c9b6ad43f1f8fa4c8978570115450dd0d10278378474c41dfd0578f42852/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-813566",
	                "Source": "/var/lib/docker/volumes/addons-813566/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-813566",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-813566",
	                "name.minikube.sigs.k8s.io": "addons-813566",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "825182b9827a38f19f543604af870415469290c48168c9f1560f9a98d262bb40",
	            "SandboxKey": "/var/run/docker/netns/825182b9827a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34252"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34253"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-813566": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "790ebc512132e18e545528d648e91e69439e82b35187bc045eb35930565f1a2d",
	                    "EndpointID": "a68989a89c46c13d039c8a0954dda46499091ea2f2312918b8bc975fd6a9459e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-813566",
	                        "444c6b599521"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
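Note the HostConfig above: NanoCpus is 2000000000 (2 CPUs) and Memory is 4194304000 bytes (4000 MiB), so the entire cluster runs inside a 2-CPU node container. A short diagnostic sketch for confirming the Insufficient cpu verdict against that budget (context and node names taken from this run):

# Compare the node's allocatable CPU with the CPU already requested.
kubectl --context addons-813566 describe node addons-813566 | grep -A 10 'Allocated resources'
# List every pod competing for those 2 CPUs.
kubectl --context addons-813566 get pods -A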
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-813566 -n addons-813566
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 logs -n 25: (1.645967063s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-188351   | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |                     |
	|         | -p download-only-188351              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC | 04 Oct 24 02:47 UTC |
	| delete  | -p download-only-188351              | download-only-188351   | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC | 04 Oct 24 02:47 UTC |
	| start   | -o=json --download-only              | download-only-577482   | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |                     |
	|         | -p download-only-577482              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| delete  | -p download-only-577482              | download-only-577482   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| delete  | -p download-only-188351              | download-only-188351   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| delete  | -p download-only-577482              | download-only-577482   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | --download-only -p                   | download-docker-341067 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | download-docker-341067               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-341067            | download-docker-341067 | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| start   | --download-only -p                   | binary-mirror-350152   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | binary-mirror-350152                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37997               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-350152              | binary-mirror-350152   | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:48 UTC |
	| addons  | disable dashboard -p                 | addons-813566          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-813566                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-813566          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC |                     |
	|         | addons-813566                        |                        |         |         |                     |                     |
	| start   | -p addons-813566 --wait=true         | addons-813566          | jenkins | v1.34.0 | 04 Oct 24 02:48 UTC | 04 Oct 24 02:51 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=logviewer                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:48:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:48:02.353903 1155571 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:48:02.354079 1155571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:02.354109 1155571 out.go:358] Setting ErrFile to fd 2...
	I1004 02:48:02.354130 1155571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:48:02.354399 1155571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 02:48:02.354883 1155571 out.go:352] Setting JSON to false
	I1004 02:48:02.355823 1155571 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23431,"bootTime":1727986652,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 02:48:02.355925 1155571 start.go:139] virtualization:  
	I1004 02:48:02.358478 1155571 out.go:177] * [addons-813566] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 02:48:02.360801 1155571 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 02:48:02.360907 1155571 notify.go:220] Checking for updates...
	I1004 02:48:02.364940 1155571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:48:02.366869 1155571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 02:48:02.368586 1155571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 02:48:02.370555 1155571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 02:48:02.372565 1155571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:48:02.374647 1155571 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:48:02.406198 1155571 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 02:48:02.406327 1155571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:48:02.455891 1155571 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-04 02:48:02.445735579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:48:02.456004 1155571 docker.go:318] overlay module found
	I1004 02:48:02.459169 1155571 out.go:177] * Using the docker driver based on user configuration
	I1004 02:48:02.460785 1155571 start.go:297] selected driver: docker
	I1004 02:48:02.460808 1155571 start.go:901] validating driver "docker" against <nil>
	I1004 02:48:02.460832 1155571 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:48:02.461519 1155571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:48:02.514251 1155571 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-04 02:48:02.504580573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:48:02.514460 1155571 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:48:02.514739 1155571 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:48:02.516518 1155571 out.go:177] * Using Docker driver with root privileges
	I1004 02:48:02.518370 1155571 cni.go:84] Creating CNI manager for ""
	I1004 02:48:02.518434 1155571 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 02:48:02.518449 1155571 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:48:02.518547 1155571 start.go:340] cluster config:
	{Name:addons-813566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-813566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:02.520745 1155571 out.go:177] * Starting "addons-813566" primary control-plane node in "addons-813566" cluster
	I1004 02:48:02.522508 1155571 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1004 02:48:02.524483 1155571 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 02:48:02.526007 1155571 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 02:48:02.526159 1155571 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 02:48:02.526191 1155571 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1004 02:48:02.526202 1155571 cache.go:56] Caching tarball of preloaded images
	I1004 02:48:02.526281 1155571 preload.go:172] Found /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1004 02:48:02.526297 1155571 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1004 02:48:02.526687 1155571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/config.json ...
	I1004 02:48:02.526714 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/config.json: {Name:mk6d37db0fec3fc576579e55fd9b669e5cde6409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:02.542042 1155571 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:48:02.542160 1155571 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1004 02:48:02.542184 1155571 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1004 02:48:02.542190 1155571 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1004 02:48:02.542202 1155571 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1004 02:48:02.542207 1155571 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1004 02:48:19.667534 1155571 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1004 02:48:19.667591 1155571 cache.go:194] Successfully downloaded all kic artifacts
	I1004 02:48:19.667630 1155571 start.go:360] acquireMachinesLock for addons-813566: {Name:mk526a19afb6520edf3fa10bbb3b4f54bca7b4bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:48:19.668230 1155571 start.go:364] duration metric: took 559.62µs to acquireMachinesLock for "addons-813566"
	I1004 02:48:19.668272 1155571 start.go:93] Provisioning new machine with config: &{Name:addons-813566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-813566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1004 02:48:19.668417 1155571 start.go:125] createHost starting for "" (driver="docker")
	I1004 02:48:19.670494 1155571 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1004 02:48:19.670732 1155571 start.go:159] libmachine.API.Create for "addons-813566" (driver="docker")
	I1004 02:48:19.670768 1155571 client.go:168] LocalClient.Create starting
	I1004 02:48:19.670881 1155571 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem
	I1004 02:48:20.395833 1155571 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem
	I1004 02:48:20.759551 1155571 cli_runner.go:164] Run: docker network inspect addons-813566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1004 02:48:20.776037 1155571 cli_runner.go:211] docker network inspect addons-813566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1004 02:48:20.776120 1155571 network_create.go:284] running [docker network inspect addons-813566] to gather additional debugging logs...
	I1004 02:48:20.776141 1155571 cli_runner.go:164] Run: docker network inspect addons-813566
	W1004 02:48:20.791073 1155571 cli_runner.go:211] docker network inspect addons-813566 returned with exit code 1
	I1004 02:48:20.791109 1155571 network_create.go:287] error running [docker network inspect addons-813566]: docker network inspect addons-813566: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-813566 not found
	I1004 02:48:20.791123 1155571 network_create.go:289] output of [docker network inspect addons-813566]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-813566 not found
	
	** /stderr **
	I1004 02:48:20.791231 1155571 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 02:48:20.806656 1155571 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004e6cd0}
	I1004 02:48:20.806701 1155571 network_create.go:124] attempt to create docker network addons-813566 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1004 02:48:20.806762 1155571 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-813566 addons-813566
	I1004 02:48:20.873662 1155571 network_create.go:108] docker network addons-813566 192.168.49.0/24 created
	I1004 02:48:20.873695 1155571 kic.go:121] calculated static IP "192.168.49.2" for the "addons-813566" container
	I1004 02:48:20.873772 1155571 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1004 02:48:20.887041 1155571 cli_runner.go:164] Run: docker volume create addons-813566 --label name.minikube.sigs.k8s.io=addons-813566 --label created_by.minikube.sigs.k8s.io=true
	I1004 02:48:20.903683 1155571 oci.go:103] Successfully created a docker volume addons-813566
	I1004 02:48:20.903775 1155571 cli_runner.go:164] Run: docker run --rm --name addons-813566-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-813566 --entrypoint /usr/bin/test -v addons-813566:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1004 02:48:23.023789 1155571 cli_runner.go:217] Completed: docker run --rm --name addons-813566-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-813566 --entrypoint /usr/bin/test -v addons-813566:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (2.119970127s)
	I1004 02:48:23.023822 1155571 oci.go:107] Successfully prepared a docker volume addons-813566
	I1004 02:48:23.023843 1155571 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 02:48:23.023863 1155571 kic.go:194] Starting extracting preloaded images to volume ...
	I1004 02:48:23.023933 1155571 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-813566:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1004 02:48:27.050860 1155571 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-813566:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.026871433s)
	I1004 02:48:27.050890 1155571 kic.go:203] duration metric: took 4.027024646s to extract preloaded images to volume ...
	W1004 02:48:27.051033 1155571 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1004 02:48:27.051149 1155571 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1004 02:48:27.108228 1155571 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-813566 --name addons-813566 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-813566 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-813566 --network addons-813566 --ip 192.168.49.2 --volume addons-813566:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1004 02:48:27.397808 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Running}}
	I1004 02:48:27.414988 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:27.438414 1155571 cli_runner.go:164] Run: docker exec addons-813566 stat /var/lib/dpkg/alternatives/iptables
	I1004 02:48:27.508142 1155571 oci.go:144] the created container "addons-813566" has a running status.
	I1004 02:48:27.508169 1155571 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa...
	I1004 02:48:27.829000 1155571 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1004 02:48:27.855734 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:27.888477 1155571 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1004 02:48:27.888502 1155571 kic_runner.go:114] Args: [docker exec --privileged addons-813566 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1004 02:48:27.974977 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:28.009379 1155571 machine.go:93] provisionDockerMachine start ...
	I1004 02:48:28.009489 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:28.035418 1155571 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:28.035686 1155571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34252 <nil> <nil>}
	I1004 02:48:28.035697 1155571 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 02:48:28.213988 1155571 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-813566
	
	I1004 02:48:28.214013 1155571 ubuntu.go:169] provisioning hostname "addons-813566"
	I1004 02:48:28.214112 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:28.240165 1155571 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:28.240414 1155571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34252 <nil> <nil>}
	I1004 02:48:28.240428 1155571 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-813566 && echo "addons-813566" | sudo tee /etc/hostname
	I1004 02:48:28.393096 1155571 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-813566
	
	I1004 02:48:28.393207 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:28.413030 1155571 main.go:141] libmachine: Using SSH client type: native
	I1004 02:48:28.413271 1155571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34252 <nil> <nil>}
	I1004 02:48:28.413287 1155571 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-813566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-813566/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-813566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:48:28.552383 1155571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:48:28.552458 1155571 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-1149434/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-1149434/.minikube}
	I1004 02:48:28.552510 1155571 ubuntu.go:177] setting up certificates
	I1004 02:48:28.552562 1155571 provision.go:84] configureAuth start
	I1004 02:48:28.552687 1155571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-813566
	I1004 02:48:28.568692 1155571 provision.go:143] copyHostCerts
	I1004 02:48:28.568777 1155571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.pem (1078 bytes)
	I1004 02:48:28.568903 1155571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-1149434/.minikube/cert.pem (1123 bytes)
	I1004 02:48:28.568967 1155571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-1149434/.minikube/key.pem (1679 bytes)
	I1004 02:48:28.569019 1155571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca-key.pem org=jenkins.addons-813566 san=[127.0.0.1 192.168.49.2 addons-813566 localhost minikube]
	I1004 02:48:29.048849 1155571 provision.go:177] copyRemoteCerts
	I1004 02:48:29.048943 1155571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:48:29.048997 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:29.064778 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:29.161007 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1004 02:48:29.185202 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 02:48:29.209938 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 02:48:29.234014 1155571 provision.go:87] duration metric: took 681.409418ms to configureAuth
	I1004 02:48:29.234045 1155571 ubuntu.go:193] setting minikube options for container-runtime
	I1004 02:48:29.234232 1155571 config.go:182] Loaded profile config "addons-813566": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 02:48:29.234247 1155571 machine.go:96] duration metric: took 1.224847764s to provisionDockerMachine
	I1004 02:48:29.234254 1155571 client.go:171] duration metric: took 9.563478142s to LocalClient.Create
	I1004 02:48:29.234273 1155571 start.go:167] duration metric: took 9.5635416s to libmachine.API.Create "addons-813566"
	I1004 02:48:29.234286 1155571 start.go:293] postStartSetup for "addons-813566" (driver="docker")
	I1004 02:48:29.234296 1155571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:48:29.234356 1155571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:48:29.234409 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:29.250777 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:29.346327 1155571 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:48:29.349553 1155571 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 02:48:29.349590 1155571 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 02:48:29.349602 1155571 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 02:48:29.349612 1155571 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 02:48:29.349627 1155571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-1149434/.minikube/addons for local assets ...
	I1004 02:48:29.349712 1155571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-1149434/.minikube/files for local assets ...
	I1004 02:48:29.349740 1155571 start.go:296] duration metric: took 115.447495ms for postStartSetup
	I1004 02:48:29.350057 1155571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-813566
	I1004 02:48:29.366954 1155571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/config.json ...
	I1004 02:48:29.367246 1155571 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 02:48:29.367300 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:29.384550 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:29.477620 1155571 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 02:48:29.482549 1155571 start.go:128] duration metric: took 9.814110601s to createHost
	I1004 02:48:29.482582 1155571 start.go:83] releasing machines lock for "addons-813566", held for 9.81433144s
	I1004 02:48:29.482670 1155571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-813566
	I1004 02:48:29.502621 1155571 ssh_runner.go:195] Run: cat /version.json
	I1004 02:48:29.502654 1155571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:48:29.502676 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:29.502719 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:29.523388 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:29.525905 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:29.619980 1155571 ssh_runner.go:195] Run: systemctl --version
	I1004 02:48:29.751279 1155571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 02:48:29.755824 1155571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1004 02:48:29.780616 1155571 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1004 02:48:29.780704 1155571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:48:29.808126 1155571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
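
The two find commands above are minikube's usual CNI pre-flight: first patch any loopback conf up to cniVersion 1.0.0 (adding a "name" field if absent), then park competing bridge/podman configs by renaming them with a .mk_disabled suffix so the runtime stops loading them. A hedged Go equivalent of the rename idiom, with paths taken from the log; this is a re-implementation for illustration, not the actual cni.go source:

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Disable any bridge/podman CNI configs without deleting them,
		// mirroring the `find ... -exec mv {} {}.mk_disabled` above.
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
			for _, f := range matches {
				if strings.HasSuffix(f, ".mk_disabled") {
					continue // already parked
				}
				_ = os.Rename(f, f+".mk_disabled")
			}
		}
	}
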
	I1004 02:48:29.808152 1155571 start.go:495] detecting cgroup driver to use...
	I1004 02:48:29.808187 1155571 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 02:48:29.808250 1155571 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1004 02:48:29.821043 1155571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1004 02:48:29.832874 1155571 docker.go:217] disabling cri-docker service (if available) ...
	I1004 02:48:29.832999 1155571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:48:29.847334 1155571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:48:29.863318 1155571 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:48:29.963388 1155571 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:48:30.133888 1155571 docker.go:233] disabling docker service ...
	I1004 02:48:30.133974 1155571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:48:30.158221 1155571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:48:30.174326 1155571 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:48:30.272465 1155571 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:48:30.364358 1155571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:48:30.376685 1155571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:48:30.393100 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1004 02:48:30.403637 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1004 02:48:30.414316 1155571 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1004 02:48:30.414395 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1004 02:48:30.424485 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1004 02:48:30.434240 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1004 02:48:30.444167 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1004 02:48:30.454065 1155571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:48:30.464135 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1004 02:48:30.474640 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1004 02:48:30.484747 1155571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
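
For readability, the net effect of the sed edits from 02:48:30.393100 onward is a CRI section of /etc/containerd/config.toml roughly like the following; this is reconstructed from the commands themselves, not captured from the node:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false

SystemdCgroup = false matches the "cgroupfs" driver detected at 02:48:29.808187; the kubelet is configured to the same driver below (cgroupDriver: cgroupfs), which is the runtime/kubelet consistency kubeadm expects.
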
	I1004 02:48:30.494679 1155571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:48:30.503069 1155571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:48:30.511269 1155571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:30.598818 1155571 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1004 02:48:30.738807 1155571 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1004 02:48:30.738894 1155571 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1004 02:48:30.742759 1155571 start.go:563] Will wait 60s for crictl version
	I1004 02:48:30.742831 1155571 ssh_runner.go:195] Run: which crictl
	I1004 02:48:30.746103 1155571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:48:30.781480 1155571 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1004 02:48:30.781561 1155571 ssh_runner.go:195] Run: containerd --version
	I1004 02:48:30.806800 1155571 ssh_runner.go:195] Run: containerd --version
	I1004 02:48:30.831183 1155571 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1004 02:48:30.833165 1155571 cli_runner.go:164] Run: docker network inspect addons-813566 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 02:48:30.847973 1155571 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1004 02:48:30.851708 1155571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:48:30.862289 1155571 kubeadm.go:883] updating cluster {Name:addons-813566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-813566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 02:48:30.862420 1155571 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 02:48:30.862481 1155571 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:48:30.903605 1155571 containerd.go:627] all images are preloaded for containerd runtime.
	I1004 02:48:30.903627 1155571 containerd.go:534] Images already preloaded, skipping extraction
	I1004 02:48:30.903685 1155571 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:48:30.943420 1155571 containerd.go:627] all images are preloaded for containerd runtime.
	I1004 02:48:30.943441 1155571 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:48:30.943449 1155571 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1004 02:48:30.943556 1155571 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-813566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-813566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1004 02:48:30.943633 1155571 ssh_runner.go:195] Run: sudo crictl info
	I1004 02:48:30.982187 1155571 cni.go:84] Creating CNI manager for ""
	I1004 02:48:30.982212 1155571 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 02:48:30.982222 1155571 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 02:48:30.982264 1155571 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-813566 NodeName:addons-813566 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:48:30.982418 1155571 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-813566"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:48:30.982500 1155571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1004 02:48:30.991107 1155571 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:48:30.991177 1155571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:48:30.999474 1155571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1004 02:48:31.020941 1155571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:48:31.038421 1155571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1004 02:48:31.056142 1155571 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1004 02:48:31.059787 1155571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
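
Both /etc/hosts updates (host.minikube.internal at 02:48:30.851708 and control-plane.minikube.internal here) use the same idempotent filter-then-append one-liner: strip any line already ending in the tab-separated name, then append the fresh mapping. A Go sketch of that idiom, assuming the same tab-separated format the preceding grep checks for:

	package main

	import (
		"os"
		"strings"
	)

	// setHostsEntry drops any existing line ending in "\t<name>" and
	// appends a fresh "ip\tname" mapping, mirroring the bash one-liner.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		_ = setHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
	}

Because the filter runs before the append, re-running the update after an IP change converges on one entry instead of accumulating duplicates.
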
	I1004 02:48:31.070912 1155571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:31.167041 1155571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:48:31.183765 1155571 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566 for IP: 192.168.49.2
	I1004 02:48:31.183828 1155571 certs.go:194] generating shared ca certs ...
	I1004 02:48:31.183875 1155571 certs.go:226] acquiring lock for ca certs: {Name:mkbb55aef12d0dc8daa9e4b13628be072878b5e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.184090 1155571 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.key
	I1004 02:48:31.508189 1155571 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt ...
	I1004 02:48:31.508227 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt: {Name:mk932e54e5c756220c3934468cafa316f4d1dda6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.508440 1155571 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.key ...
	I1004 02:48:31.508455 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.key: {Name:mk7b2c8106105dec4d0660d9ec565aec78ba27d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.508557 1155571 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.key
	I1004 02:48:31.628987 1155571 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.crt ...
	I1004 02:48:31.629014 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.crt: {Name:mkfc3e5c4845361b202eb21d71a204da49cd7789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.629548 1155571 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.key ...
	I1004 02:48:31.629568 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.key: {Name:mk46fef7edab00a6929d394641588d4d495af392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:31.630042 1155571 certs.go:256] generating profile certs ...
	I1004 02:48:31.630130 1155571 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.key
	I1004 02:48:31.630164 1155571 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt with IP's: []
	I1004 02:48:32.084329 1155571 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt ...
	I1004 02:48:32.084362 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: {Name:mk8b487e9b9650dded92b7bfa2c3eaee46548cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.084552 1155571 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.key ...
	I1004 02:48:32.084566 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.key: {Name:mk8eceb3aeedf6c1b4526bb52e947c4fccbf4a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.085157 1155571 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.key.6713c466
	I1004 02:48:32.085183 1155571 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.crt.6713c466 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1004 02:48:32.336230 1155571 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.crt.6713c466 ...
	I1004 02:48:32.336262 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.crt.6713c466: {Name:mke951d6d13565b10acc37e967e93d906ddedb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.336473 1155571 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.key.6713c466 ...
	I1004 02:48:32.336491 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.key.6713c466: {Name:mk4cdef59a0b8c37a9f87d5c0830d0861ba3a3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.337001 1155571 certs.go:381] copying /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.crt.6713c466 -> /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.crt
	I1004 02:48:32.337095 1155571 certs.go:385] copying /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.key.6713c466 -> /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.key
	I1004 02:48:32.337150 1155571 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.key
	I1004 02:48:32.337171 1155571 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.crt with IP's: []
	I1004 02:48:32.489991 1155571 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.crt ...
	I1004 02:48:32.490025 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.crt: {Name:mk9e936738a1e8e32de6c2ad705e6348c81023a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.491012 1155571 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.key ...
	I1004 02:48:32.491032 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.key: {Name:mk67725b2586c7c96974cd03fb443230c95bc824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:32.491788 1155571 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 02:48:32.491836 1155571 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem (1078 bytes)
	I1004 02:48:32.491861 1155571 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:48:32.491891 1155571 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/key.pem (1679 bytes)
	I1004 02:48:32.492526 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:48:32.523068 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:48:32.549776 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:48:32.573648 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:48:32.597454 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1004 02:48:32.625005 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 02:48:32.648434 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:48:32.671062 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 02:48:32.695172 1155571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:48:32.718811 1155571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:48:32.736273 1155571 ssh_runner.go:195] Run: openssl version
	I1004 02:48:32.741523 1155571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:48:32.751402 1155571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:32.754882 1155571 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:32.754945 1155571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:48:32.761482 1155571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
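
The link name b5213941.0 is not arbitrary: it is OpenSSL's subject-name hash of minikubeCA, the value printed by the openssl x509 -hash -noout run at 02:48:32.754945, plus a .0 collision-counter suffix; this is the c_rehash layout TLS stacks use to locate trust anchors in /etc/ssl/certs. A hedged Go sketch that derives the same link by shelling out to openssl, with paths taken from the log:

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// openssl prints the 8-hex-digit subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // force-refresh, like `ln -fs`
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
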
	I1004 02:48:32.770515 1155571 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 02:48:32.773576 1155571 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1004 02:48:32.773677 1155571 kubeadm.go:392] StartCluster: {Name:addons-813566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-813566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:48:32.773763 1155571 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1004 02:48:32.773819 1155571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:48:32.810161 1155571 cri.go:89] found id: ""
	I1004 02:48:32.810236 1155571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:48:32.818512 1155571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:48:32.828351 1155571 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1004 02:48:32.828472 1155571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:48:32.842968 1155571 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:48:32.842991 1155571 kubeadm.go:157] found existing configuration files:
	
	I1004 02:48:32.843069 1155571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1004 02:48:32.852990 1155571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1004 02:48:32.853081 1155571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1004 02:48:32.862546 1155571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1004 02:48:32.872750 1155571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1004 02:48:32.872861 1155571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1004 02:48:32.885257 1155571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1004 02:48:32.894046 1155571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1004 02:48:32.894133 1155571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1004 02:48:32.902492 1155571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1004 02:48:32.910923 1155571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1004 02:48:32.910993 1155571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1004 02:48:32.919056 1155571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1004 02:48:32.957299 1155571 kubeadm.go:310] W1004 02:48:32.956677    1019 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:48:32.958146 1155571 kubeadm.go:310] W1004 02:48:32.957681    1019 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1004 02:48:32.977867 1155571 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1004 02:48:33.050825 1155571 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:48:49.554185 1155571 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1004 02:48:49.554246 1155571 kubeadm.go:310] [preflight] Running pre-flight checks
	I1004 02:48:49.554357 1155571 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1004 02:48:49.554424 1155571 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1004 02:48:49.554471 1155571 kubeadm.go:310] OS: Linux
	I1004 02:48:49.554529 1155571 kubeadm.go:310] CGROUPS_CPU: enabled
	I1004 02:48:49.554594 1155571 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1004 02:48:49.554651 1155571 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1004 02:48:49.554711 1155571 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1004 02:48:49.554760 1155571 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1004 02:48:49.554812 1155571 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1004 02:48:49.554858 1155571 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1004 02:48:49.554906 1155571 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1004 02:48:49.554953 1155571 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1004 02:48:49.555036 1155571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:48:49.555157 1155571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:48:49.555271 1155571 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 02:48:49.555341 1155571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:48:49.557311 1155571 out.go:235]   - Generating certificates and keys ...
	I1004 02:48:49.557418 1155571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1004 02:48:49.557497 1155571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1004 02:48:49.557567 1155571 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:48:49.557624 1155571 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:48:49.557683 1155571 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:48:49.557732 1155571 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1004 02:48:49.557785 1155571 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1004 02:48:49.557900 1155571 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-813566 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1004 02:48:49.557953 1155571 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1004 02:48:49.558068 1155571 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-813566 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1004 02:48:49.558132 1155571 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:48:49.558194 1155571 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:48:49.558237 1155571 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1004 02:48:49.558292 1155571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:48:49.558342 1155571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:48:49.558397 1155571 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1004 02:48:49.558450 1155571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:48:49.558512 1155571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:48:49.558565 1155571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:48:49.558646 1155571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:48:49.558710 1155571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:48:49.560343 1155571 out.go:235]   - Booting up control plane ...
	I1004 02:48:49.560483 1155571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:48:49.560586 1155571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:48:49.560661 1155571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:48:49.560769 1155571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:48:49.560869 1155571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:48:49.560918 1155571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1004 02:48:49.561090 1155571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1004 02:48:49.561205 1155571 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1004 02:48:49.561271 1155571 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502410482s
	I1004 02:48:49.561343 1155571 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1004 02:48:49.561419 1155571 kubeadm.go:310] [api-check] The API server is healthy after 6.002392233s
	I1004 02:48:49.561564 1155571 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:48:49.561696 1155571 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:48:49.561757 1155571 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:48:49.561933 1155571 kubeadm.go:310] [mark-control-plane] Marking the node addons-813566 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:48:49.561987 1155571 kubeadm.go:310] [bootstrap-token] Using token: nx5gr3.rbi329fd7xefr1xu
	I1004 02:48:49.563834 1155571 out.go:235]   - Configuring RBAC rules ...
	I1004 02:48:49.563956 1155571 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:48:49.564070 1155571 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:48:49.564212 1155571 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:48:49.564404 1155571 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:48:49.564521 1155571 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:48:49.564613 1155571 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:48:49.564729 1155571 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:48:49.564776 1155571 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1004 02:48:49.564827 1155571 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1004 02:48:49.564835 1155571 kubeadm.go:310] 
	I1004 02:48:49.564895 1155571 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1004 02:48:49.564903 1155571 kubeadm.go:310] 
	I1004 02:48:49.564979 1155571 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1004 02:48:49.564988 1155571 kubeadm.go:310] 
	I1004 02:48:49.565013 1155571 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1004 02:48:49.565075 1155571 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:48:49.565132 1155571 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:48:49.565141 1155571 kubeadm.go:310] 
	I1004 02:48:49.565194 1155571 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1004 02:48:49.565201 1155571 kubeadm.go:310] 
	I1004 02:48:49.565248 1155571 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:48:49.565256 1155571 kubeadm.go:310] 
	I1004 02:48:49.565308 1155571 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1004 02:48:49.565385 1155571 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:48:49.565455 1155571 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:48:49.565470 1155571 kubeadm.go:310] 
	I1004 02:48:49.565553 1155571 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:48:49.565634 1155571 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1004 02:48:49.565642 1155571 kubeadm.go:310] 
	I1004 02:48:49.565727 1155571 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nx5gr3.rbi329fd7xefr1xu \
	I1004 02:48:49.565832 1155571 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:31d380c2429cf173d3374745a61c9d6cef5d04ea5fcc015de732e21b950006dd \
	I1004 02:48:49.565857 1155571 kubeadm.go:310] 	--control-plane 
	I1004 02:48:49.565865 1155571 kubeadm.go:310] 
	I1004 02:48:49.565949 1155571 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:48:49.565956 1155571 kubeadm.go:310] 
	I1004 02:48:49.566036 1155571 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nx5gr3.rbi329fd7xefr1xu \
	I1004 02:48:49.566156 1155571 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:31d380c2429cf173d3374745a61c9d6cef5d04ea5fcc015de732e21b950006dd 
	I1004 02:48:49.566170 1155571 cni.go:84] Creating CNI manager for ""
	I1004 02:48:49.566177 1155571 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 02:48:49.567935 1155571 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 02:48:49.569501 1155571 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 02:48:49.573399 1155571 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1004 02:48:49.573421 1155571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1004 02:48:49.591692 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 02:48:49.864215 1155571 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:48:49.864381 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:49.864535 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-813566 minikube.k8s.io/updated_at=2024_10_04T02_48_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=addons-813566 minikube.k8s.io/primary=true
	I1004 02:48:50.037583 1155571 ops.go:34] apiserver oom_adj: -16
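
For reference, oom_adj is the legacy kernel interface (range -17 to 15, where -17 disables OOM killing entirely); the -16 read back here means the kernel's OOM killer will sacrifice almost anything else before kube-apiserver. Newer kernels express the same policy as oom_score_adj on a -1000..1000 scale.
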
	I1004 02:48:50.037705 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:50.538415 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:51.038823 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:51.537844 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:52.038642 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:52.538804 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.038024 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.538353 1155571 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:48:53.629358 1155571 kubeadm.go:1113] duration metric: took 3.765023716s to wait for elevateKubeSystemPrivileges
	I1004 02:48:53.629385 1155571 kubeadm.go:394] duration metric: took 20.855714009s to StartCluster
	I1004 02:48:53.629401 1155571 settings.go:142] acquiring lock: {Name:mk1a349894ce66bafe43f883e774857dde6892e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:53.629520 1155571 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 02:48:53.629884 1155571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/kubeconfig: {Name:mkbb0a06a5c0d16e5af194939942d8ac82543668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:48:53.630499 1155571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:48:53.630517 1155571 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1004 02:48:53.630807 1155571 config.go:182] Loaded profile config "addons-813566": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 02:48:53.630856 1155571 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:true metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1004 02:48:53.630955 1155571 addons.go:69] Setting yakd=true in profile "addons-813566"
	I1004 02:48:53.630975 1155571 addons.go:234] Setting addon yakd=true in "addons-813566"
	I1004 02:48:53.631005 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.631521 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.632109 1155571 addons.go:69] Setting cloud-spanner=true in profile "addons-813566"
	I1004 02:48:53.632129 1155571 addons.go:234] Setting addon cloud-spanner=true in "addons-813566"
	I1004 02:48:53.632153 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.632612 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.635151 1155571 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-813566"
	I1004 02:48:53.635199 1155571 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-813566"
	I1004 02:48:53.635225 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.635635 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.635915 1155571 out.go:177] * Verifying Kubernetes components...
	I1004 02:48:53.636144 1155571 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-813566"
	I1004 02:48:53.636171 1155571 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-813566"
	I1004 02:48:53.636209 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.636776 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.639407 1155571 addons.go:69] Setting registry=true in profile "addons-813566"
	I1004 02:48:53.639435 1155571 addons.go:234] Setting addon registry=true in "addons-813566"
	I1004 02:48:53.639470 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.639934 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.652141 1155571 addons.go:69] Setting default-storageclass=true in profile "addons-813566"
	I1004 02:48:53.652237 1155571 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-813566"
	I1004 02:48:53.652723 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.665240 1155571 addons.go:69] Setting storage-provisioner=true in profile "addons-813566"
	I1004 02:48:53.665399 1155571 addons.go:234] Setting addon storage-provisioner=true in "addons-813566"
	I1004 02:48:53.665474 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.667901 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.678032 1155571 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-813566"
	I1004 02:48:53.678069 1155571 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-813566"
	I1004 02:48:53.678447 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.698864 1155571 addons.go:69] Setting volcano=true in profile "addons-813566"
	I1004 02:48:53.698900 1155571 addons.go:234] Setting addon volcano=true in "addons-813566"
	I1004 02:48:53.698943 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.699444 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.674871 1155571 addons.go:69] Setting gcp-auth=true in profile "addons-813566"
	I1004 02:48:53.699657 1155571 mustload.go:65] Loading cluster: addons-813566
	I1004 02:48:53.699841 1155571 config.go:182] Loaded profile config "addons-813566": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 02:48:53.700105 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.674884 1155571 addons.go:69] Setting ingress=true in profile "addons-813566"
	I1004 02:48:53.710073 1155571 addons.go:234] Setting addon ingress=true in "addons-813566"
	I1004 02:48:53.710123 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.712241 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.718144 1155571 addons.go:69] Setting volumesnapshots=true in profile "addons-813566"
	I1004 02:48:53.718173 1155571 addons.go:234] Setting addon volumesnapshots=true in "addons-813566"
	I1004 02:48:53.718213 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.718779 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.674898 1155571 addons.go:69] Setting ingress-dns=true in profile "addons-813566"
	I1004 02:48:53.742365 1155571 addons.go:234] Setting addon ingress-dns=true in "addons-813566"
	I1004 02:48:53.674904 1155571 addons.go:69] Setting inspektor-gadget=true in profile "addons-813566"
	I1004 02:48:53.745965 1155571 addons.go:234] Setting addon inspektor-gadget=true in "addons-813566"
	I1004 02:48:53.746050 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.674908 1155571 addons.go:69] Setting logviewer=true in profile "addons-813566"
	I1004 02:48:53.746615 1155571 addons.go:234] Setting addon logviewer=true in "addons-813566"
	I1004 02:48:53.746683 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.747223 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
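The lines above repeat one pattern per addon: record the flag in the profile, then probe the cluster container's state before wiring the addon up, which is why the same inspect format string recurs verbatim. A minimal sketch of that probe, assuming only the docker CLI on PATH; containerStatus is an illustrative name, not minikube's actual helper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerStatus runs `docker container inspect` and returns
	// .State.Status, e.g. "running". Illustrative helper only.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := containerStatus("addons-813566")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("addons-813566 is", status)
	}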
	I1004 02:48:53.757495 1155571 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1004 02:48:53.759141 1155571 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1004 02:48:53.759217 1155571 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1004 02:48:53.759318 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:53.674911 1155571 addons.go:69] Setting metrics-server=true in profile "addons-813566"
	I1004 02:48:53.760957 1155571 addons.go:234] Setting addon metrics-server=true in "addons-813566"
	I1004 02:48:53.761075 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.675075 1155571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:48:53.773854 1155571 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1004 02:48:53.775779 1155571 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:48:53.775884 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1004 02:48:53.776064 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:53.791358 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.792216 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.804830 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.812329 1155571 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1004 02:48:53.824434 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1004 02:48:53.826316 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1004 02:48:53.835088 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1004 02:48:53.836911 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1004 02:48:53.856839 1155571 out.go:177]   - Using image docker.io/registry:2.8.3
	I1004 02:48:53.858684 1155571 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1004 02:48:53.860614 1155571 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1004 02:48:53.860669 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1004 02:48:53.860846 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:53.865652 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.876610 1155571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:48:53.878962 1155571 addons.go:234] Setting addon default-storageclass=true in "addons-813566"
	I1004 02:48:53.879015 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.879497 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.912129 1155571 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-813566"
	I1004 02:48:53.912259 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.913036 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:48:53.928564 1155571 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1004 02:48:53.928596 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1004 02:48:53.928682 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:53.962694 1155571 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:48:53.968898 1155571 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:48:53.968923 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:48:53.969044 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:53.984350 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:48:53.996001 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1004 02:48:53.999955 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1004 02:48:54.003198 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1004 02:48:54.015235 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1004 02:48:54.015649 1155571 out.go:177]   - Using image docker.io/ivans3/minikube-log-viewer:v1
	I1004 02:48:54.015808 1155571 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1004 02:48:54.025950 1155571 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1004 02:48:54.032745 1155571 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1004 02:48:54.033078 1155571 addons.go:431] installing /etc/kubernetes/addons/logviewer-dp-and-svc.yaml
	I1004 02:48:54.033095 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/logviewer-dp-and-svc.yaml (2016 bytes)
	I1004 02:48:54.033167 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.049032 1155571 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1004 02:48:54.049061 1155571 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1004 02:48:54.049131 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.066032 1155571 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1004 02:48:54.067739 1155571 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:48:54.067968 1155571 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1004 02:48:54.067996 1155571 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1004 02:48:54.068071 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.096575 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.097287 1155571 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1004 02:48:54.101826 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1004 02:48:54.101855 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1004 02:48:54.101931 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.107176 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.109364 1155571 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1004 02:48:54.116503 1155571 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1004 02:48:54.116530 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1004 02:48:54.116607 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.117077 1155571 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:48:54.117091 1155571 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:48:54.117149 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.133587 1155571 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:48:54.135536 1155571 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:48:54.135562 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1004 02:48:54.135635 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.195827 1155571 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1004 02:48:54.196554 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.196729 1155571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 02:48:54.198208 1155571 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:48:54.198225 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1004 02:48:54.198280 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.203749 1155571 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1004 02:48:54.211142 1155571 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:48:54.211222 1155571 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:48:54.211329 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.243006 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.284496 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.285353 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.292835 1155571 out.go:177]   - Using image docker.io/busybox:stable
	I1004 02:48:54.294870 1155571 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1004 02:48:54.296665 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.297153 1155571 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:48:54.297169 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1004 02:48:54.297226 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:48:54.305503 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.310346 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.332505 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.356392 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.364469 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.365495 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	W1004 02:48:54.367295 1155571 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1004 02:48:54.367322 1155571 retry.go:31] will retry after 290.38631ms: ssh: handshake failed: EOF
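The handshake EOF above is treated as transient: retry.go reschedules the dial after a short delay instead of failing the addon install outright. A hedged sketch of that shape; retryAfter and its signature are assumptions for illustration, not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryAfter re-runs op after a fixed delay, up to attempts times.
	func retryAfter(delay time.Duration, attempts int, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryAfter(290*time.Millisecond, 3, func() error {
			calls++
			if calls < 2 {
				return errors.New("ssh: handshake failed: EOF")
			}
			return nil
		})
		fmt.Println("result:", err) // nil once the second dial succeeds
	}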
	I1004 02:48:54.386295 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.401113 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:48:54.874291 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1004 02:48:55.038105 1155571 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1004 02:48:55.038192 1155571 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1004 02:48:55.038448 1155571 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1004 02:48:55.038488 1155571 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1004 02:48:55.038737 1155571 addons.go:431] installing /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:48:55.038777 1155571 ssh_runner.go:362] scp logviewer/logviewer-rbac.yaml --> /etc/kubernetes/addons/logviewer-rbac.yaml (1064 bytes)
	I1004 02:48:55.054039 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 02:48:55.065600 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1004 02:48:55.065626 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1004 02:48:55.118974 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:48:55.161243 1155571 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1004 02:48:55.161273 1155571 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1004 02:48:55.186001 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1004 02:48:55.206993 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 02:48:55.362313 1155571 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:48:55.362348 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1004 02:48:55.365045 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:48:55.412723 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 02:48:55.441242 1155571 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1004 02:48:55.441314 1155571 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1004 02:48:55.478909 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1004 02:48:55.486582 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1004 02:48:55.486652 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1004 02:48:55.489917 1155571 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1004 02:48:55.489982 1155571 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1004 02:48:55.509036 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml
	I1004 02:48:55.535138 1155571 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1004 02:48:55.535213 1155571 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1004 02:48:55.574771 1155571 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:48:55.574842 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1004 02:48:55.751373 1155571 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1004 02:48:55.751454 1155571 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1004 02:48:55.758449 1155571 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:48:55.758514 1155571 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:48:55.785493 1155571 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1004 02:48:55.785574 1155571 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1004 02:48:55.793018 1155571 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1004 02:48:55.793091 1155571 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1004 02:48:55.834745 1155571 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.958099432s)
	I1004 02:48:55.834864 1155571 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
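The pipeline completed above rewrites the coredns ConfigMap in place. Reconstructed from its two sed insert expressions, the patched Corefile gains a log directive ahead of errors plus, before the forward block, a stanza along these lines:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

so host.minikube.internal resolves to the host-side address from inside the cluster, and anything else falls through to the normal forwarders.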
	I1004 02:48:55.834834 1155571 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.638082502s)
	I1004 02:48:55.835823 1155571 node_ready.go:35] waiting up to 6m0s for node "addons-813566" to be "Ready" ...
	I1004 02:48:55.839195 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1004 02:48:55.839213 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1004 02:48:55.842054 1155571 node_ready.go:49] node "addons-813566" has status "Ready":"True"
	I1004 02:48:55.842075 1155571 node_ready.go:38] duration metric: took 6.229666ms for node "addons-813566" to be "Ready" ...
	I1004 02:48:55.842084 1155571 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:48:55.855262 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1004 02:48:55.856653 1155571 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace to be "Ready" ...
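pod_ready.go keeps polling until each pod's Ready condition reports True. A rough client-go equivalent of that check, assuming /var/lib/minikube/kubeconfig is readable from where it runs; a sketch, not minikube's own code:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		// A pod counts as ready when its PodReady condition is True.
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s Ready=%v\n", p.Name, ready)
		}
	}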
	I1004 02:48:56.019638 1155571 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1004 02:48:56.019718 1155571 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1004 02:48:56.055499 1155571 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1004 02:48:56.055577 1155571 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1004 02:48:56.069711 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1004 02:48:56.069791 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1004 02:48:56.096546 1155571 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:48:56.096624 1155571 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:48:56.160410 1155571 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:48:56.160486 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1004 02:48:56.298872 1155571 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1004 02:48:56.298948 1155571 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1004 02:48:56.342107 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1004 02:48:56.342183 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1004 02:48:56.344351 1155571 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-813566" context rescaled to 1 replicas
	I1004 02:48:56.359130 1155571 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1004 02:48:56.359205 1155571 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1004 02:48:56.365025 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:48:56.524328 1155571 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:48:56.524394 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1004 02:48:56.589067 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1004 02:48:56.604576 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.730208029s)
	I1004 02:48:56.698390 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1004 02:48:56.698465 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1004 02:48:56.699179 1155571 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1004 02:48:56.699227 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1004 02:48:56.772466 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:48:56.994132 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1004 02:48:56.994205 1155571 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1004 02:48:57.015573 1155571 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1004 02:48:57.015642 1155571 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1004 02:48:57.234442 1155571 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:48:57.234519 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1004 02:48:57.346157 1155571 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1004 02:48:57.346233 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1004 02:48:57.356355 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 02:48:57.595763 1155571 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1004 02:48:57.595843 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1004 02:48:57.798749 1155571 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:48:57.798816 1155571 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1004 02:48:57.863582 1155571 pod_ready.go:103] pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace has status "Ready":"False"
	I1004 02:48:58.059789 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 02:48:59.271397 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.217311841s)
	I1004 02:48:59.271628 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.152619763s)
	I1004 02:48:59.271663 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.085639633s)
	I1004 02:48:59.271718 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.064701942s)
	I1004 02:48:59.271751 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.906685657s)
	W1004 02:48:59.284707 1155571 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
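That warning is an optimistic-concurrency conflict: something updated the local-path StorageClass between the read and the write, so the stale update was rejected. The usual remedy is to re-read and retry on conflict; a hedged client-go sketch, where the annotation key is the standard default-class marker and everything else is illustrative:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		// Error handling elided in this sketch.
		cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		cs, _ := kubernetes.NewForConfig(cfg)
		ctx := context.Background()
		err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read each attempt so the update carries a fresh resourceVersion.
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict here triggers another Get/Update round
		})
		fmt.Println("marked local-path non-default:", err)
	}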
	I1004 02:48:59.865538 1155571 pod_ready.go:103] pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:01.241112 1155571 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1004 02:49:01.241232 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:49:01.270810 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:49:01.977704 1155571 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1004 02:49:02.159117 1155571 addons.go:234] Setting addon gcp-auth=true in "addons-813566"
	I1004 02:49:02.159186 1155571 host.go:66] Checking if "addons-813566" exists ...
	I1004 02:49:02.159749 1155571 cli_runner.go:164] Run: docker container inspect addons-813566 --format={{.State.Status}}
	I1004 02:49:02.188586 1155571 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1004 02:49:02.188648 1155571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-813566
	I1004 02:49:02.224498 1155571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/addons-813566/id_rsa Username:docker}
	I1004 02:49:02.364197 1155571 pod_ready.go:103] pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:02.776164 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.363361147s)
	I1004 02:49:02.776208 1155571 addons.go:475] Verifying addon ingress=true in "addons-813566"
	I1004 02:49:02.778274 1155571 out.go:177] * Verifying ingress addon...
	I1004 02:49:02.781647 1155571 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1004 02:49:02.785938 1155571 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1004 02:49:02.785966 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:03.287374 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:03.834877 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.287123 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.400593 1155571 pod_ready.go:103] pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:04.813668 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.33466693s)
	I1004 02:49:04.813786 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/logviewer-dp-and-svc.yaml -f /etc/kubernetes/addons/logviewer-rbac.yaml: (9.304682645s)
	I1004 02:49:04.813834 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.958510593s)
	I1004 02:49:04.813917 1155571 addons.go:475] Verifying addon registry=true in "addons-813566"
	I1004 02:49:04.813960 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.224811506s)
	I1004 02:49:04.813917 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.448810717s)
	I1004 02:49:04.814222 1155571 addons.go:475] Verifying addon metrics-server=true in "addons-813566"
	I1004 02:49:04.814319 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.041770761s)
	W1004 02:49:04.814342 1155571 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 02:49:04.814357 1155571 retry.go:31] will retry after 365.640799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
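Both failures above are a CRD-establishment race rather than a bad manifest: the VolumeSnapshotClass sat in the same apply batch as the CRDs that define it, and the API server was not yet serving snapshot.storage.k8s.io/v1 when the class was submitted, hence "ensure CRDs are installed first". The retry a few seconds later (with --force) succeeds once discovery catches up. One way to sequence this explicitly, sketched with the paths from the log; the established wait condition is standard kubectl, and the ordering is the point:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("%v\n%s", args, out)
		return err
	}

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		// Apply the CRDs, wait until they are established, then apply
		// the custom resources that reference them.
		steps := [][]string{
			{kubectl, "apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
			{kubectl, "wait", "--for=condition=established", "--timeout=60s",
				"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
			{kubectl, "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}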
	I1004 02:49:04.814428 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.458004583s)
	I1004 02:49:04.818633 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:04.818668 1155571 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-813566 service yakd-dashboard -n yakd-dashboard
	
	I1004 02:49:04.818774 1155571 out.go:177] * Verifying registry addon...
	I1004 02:49:04.822215 1155571 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1004 02:49:04.948020 1155571 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 02:49:04.948051 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:05.180162 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 02:49:05.298928 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:05.341958 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:05.511889 1155571 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.323265471s)
	I1004 02:49:05.513672 1155571 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1004 02:49:05.515393 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.455491931s)
	I1004 02:49:05.515438 1155571 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-813566"
	I1004 02:49:05.517229 1155571 out.go:177] * Verifying csi-hostpath-driver addon...
	I1004 02:49:05.517294 1155571 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1004 02:49:05.519652 1155571 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1004 02:49:05.519878 1155571 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1004 02:49:05.519901 1155571 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1004 02:49:05.560937 1155571 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 02:49:05.560973 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:05.581064 1155571 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1004 02:49:05.581090 1155571 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1004 02:49:05.620601 1155571 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:05.620690 1155571 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1004 02:49:05.675887 1155571 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 02:49:05.791467 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:05.827406 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.029074 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:06.287320 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:06.326356 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:06.526175 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:06.580770 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.40055099s)
	I1004 02:49:06.773661 1155571 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.097728992s)
	I1004 02:49:06.776765 1155571 addons.go:475] Verifying addon gcp-auth=true in "addons-813566"
	I1004 02:49:06.779960 1155571 out.go:177] * Verifying gcp-auth addon...
	I1004 02:49:06.782467 1155571 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1004 02:49:06.804731 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:06.821615 1155571 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1004 02:49:06.873670 1155571 pod_ready.go:103] pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:06.902416 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:07.025140 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:07.289203 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:07.388520 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:07.525630 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:07.790000 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:07.889334 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:08.024985 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:08.288963 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:08.326697 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:08.533889 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:08.789073 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:08.826829 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:09.025909 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:09.288614 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:09.326835 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:09.364785 1155571 pod_ready.go:103] pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace has status "Ready":"False"
	I1004 02:49:09.525664 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:09.788796 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:09.888559 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:10.026982 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:10.289348 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:10.386333 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:10.525773 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:10.786811 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:10.827289 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:11.025279 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:11.299746 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:11.365441 1155571 pod_ready.go:93] pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:11.365516 1155571 pod_ready.go:82] duration metric: took 15.508803571s for pod "coredns-7c65d6cfc9-7r2nt" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.365542 1155571 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jl9hw" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.368102 1155571 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-jl9hw" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jl9hw" not found
	I1004 02:49:11.368169 1155571 pod_ready.go:82] duration metric: took 2.605278ms for pod "coredns-7c65d6cfc9-jl9hw" in "kube-system" namespace to be "Ready" ...
	E1004 02:49:11.368216 1155571 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-jl9hw" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jl9hw" not found
	I1004 02:49:11.368241 1155571 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.374818 1155571 pod_ready.go:93] pod "etcd-addons-813566" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:11.374883 1155571 pod_ready.go:82] duration metric: took 6.618259ms for pod "etcd-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.374914 1155571 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.381352 1155571 pod_ready.go:93] pod "kube-apiserver-addons-813566" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:11.381427 1155571 pod_ready.go:82] duration metric: took 6.491843ms for pod "kube-apiserver-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.381455 1155571 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.386814 1155571 pod_ready.go:93] pod "kube-controller-manager-addons-813566" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:11.386879 1155571 pod_ready.go:82] duration metric: took 5.399976ms for pod "kube-controller-manager-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.386905 1155571 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mtcgx" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.388116 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:11.524729 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:11.567935 1155571 pod_ready.go:93] pod "kube-proxy-mtcgx" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:11.568013 1155571 pod_ready.go:82] duration metric: took 181.086282ms for pod "kube-proxy-mtcgx" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.568040 1155571 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.787893 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:11.827747 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:11.966086 1155571 pod_ready.go:93] pod "kube-scheduler-addons-813566" in "kube-system" namespace has status "Ready":"True"
	I1004 02:49:11.966182 1155571 pod_ready.go:82] duration metric: took 398.119284ms for pod "kube-scheduler-addons-813566" in "kube-system" namespace to be "Ready" ...
	I1004 02:49:11.966210 1155571 pod_ready.go:39] duration metric: took 16.124109325s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:49:11.966246 1155571 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:49:11.966327 1155571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:49:11.992584 1155571 api_server.go:72] duration metric: took 18.362030779s to wait for apiserver process to appear ...
	I1004 02:49:11.992659 1155571 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:49:11.992705 1155571 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1004 02:49:12.018429 1155571 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1004 02:49:12.019732 1155571 api_server.go:141] control plane version: v1.31.1
	I1004 02:49:12.019819 1155571 api_server.go:131] duration metric: took 27.128301ms to wait for apiserver health ...
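The healthz wait above is just an HTTPS GET against the apiserver's /healthz endpoint, succeeding on a 200 response with body "ok". A hedged sketch of that probe; TLS verification is skipped only to keep the example self-contained, whereas the real harness trusts the cluster CA:

    package example

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz reports whether GET url returns 200 with body "ok",
    // as in "https://192.168.49.2:8443/healthz returned 200: ok" above.
    func checkHealthz(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustration-only shortcut; do not skip verification in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }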
	I1004 02:49:12.019856 1155571 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:49:12.024923 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:12.172435 1155571 system_pods.go:59] 19 kube-system pods found
	I1004 02:49:12.172527 1155571 system_pods.go:61] "coredns-7c65d6cfc9-7r2nt" [cf115824-1f8f-4658-8edc-9374adaba879] Running
	I1004 02:49:12.172556 1155571 system_pods.go:61] "csi-hostpath-attacher-0" [8456fecd-406d-494a-9c0e-80040a8a1aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 02:49:12.172583 1155571 system_pods.go:61] "csi-hostpath-resizer-0" [6f9236f9-2523-4020-ab74-0337b6f88e55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1004 02:49:12.172605 1155571 system_pods.go:61] "csi-hostpathplugin-j6ngh" [c1b6ab22-8a29-4b17-abbc-7026502ffa69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 02:49:12.172629 1155571 system_pods.go:61] "etcd-addons-813566" [de97b29d-5325-4ba2-ba1c-2e78973e29d0] Running
	I1004 02:49:12.172662 1155571 system_pods.go:61] "kindnet-9bqh8" [0f8a625d-6a9f-49c0-bb4c-ae9714f89b92] Running
	I1004 02:49:12.172681 1155571 system_pods.go:61] "kube-apiserver-addons-813566" [e41d6aa2-f064-4b70-8c69-57bbb53e8587] Running
	I1004 02:49:12.172699 1155571 system_pods.go:61] "kube-controller-manager-addons-813566" [2d3222d7-5e0e-4330-919b-639136147006] Running
	I1004 02:49:12.172721 1155571 system_pods.go:61] "kube-ingress-dns-minikube" [5534ca20-38ae-49ea-b27c-dafc26b2ce32] Running
	I1004 02:49:12.172742 1155571 system_pods.go:61] "kube-proxy-mtcgx" [952305f8-46e0-4fb6-af59-fc89c5da381a] Running
	I1004 02:49:12.172769 1155571 system_pods.go:61] "kube-scheduler-addons-813566" [858f3747-ce2a-4aaf-baf6-28e2483ea936] Running
	I1004 02:49:12.172795 1155571 system_pods.go:61] "logviewer-7c79c8bcc9-wks5q" [089aa2eb-91a9-4e35-af2b-4913fed9b821] Pending / Ready:ContainersNotReady (containers with unready status: [logviewer]) / ContainersReady:ContainersNotReady (containers with unready status: [logviewer])
	I1004 02:49:12.172819 1155571 system_pods.go:61] "metrics-server-84c5f94fbc-p2qpr" [f3315201-b874-46dd-b7f9-914da49dbb43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:49:12.172843 1155571 system_pods.go:61] "nvidia-device-plugin-daemonset-xqrwz" [8cc8b509-3784-424e-9f4b-54bf488b54d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1004 02:49:12.172866 1155571 system_pods.go:61] "registry-66c9cd494c-tx7r8" [08e6728e-1615-49cf-90c5-7adb914e944a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 02:49:12.172900 1155571 system_pods.go:61] "registry-proxy-gssnr" [fd499071-9287-4378-8b68-836755ad3000] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 02:49:12.172930 1155571 system_pods.go:61] "snapshot-controller-56fcc65765-d89ww" [959cc762-3c79-4718-a34a-82d6509a0b97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:12.172955 1155571 system_pods.go:61] "snapshot-controller-56fcc65765-sslhd" [2181497a-9c73-46f8-a67e-e1ec5f242df0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:12.172975 1155571 system_pods.go:61] "storage-provisioner" [2c4d678a-bf78-4c4e-b117-0b8c4d759029] Running
	I1004 02:49:12.173007 1155571 system_pods.go:74] duration metric: took 153.129743ms to wait for pod list to return data ...
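The 19-pod inventory above comes from a single List call over the kube-system namespace. A minimal sketch of producing such a listing (the output format here is illustrative, not system_pods.go's exact wording):

    package example

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods prints each kube-system pod with its UID and phase,
    // similar to the system_pods.go:61 lines above.
    func listSystemPods(ctx context.Context, c kubernetes.Interface) error {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }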
	I1004 02:49:12.173031 1155571 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:49:12.289599 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:12.327432 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:12.381020 1155571 default_sa.go:45] found service account: "default"
	I1004 02:49:12.381099 1155571 default_sa.go:55] duration metric: took 208.046995ms for default service account to be created ...
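The default_sa.go wait exists because kube-controller-manager creates the "default" ServiceAccount in each namespace asynchronously, and pods that use it cannot be created until it appears. A short sketch of that poll (interval and timeout are assumptions):

    package example

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitDefaultSA blocks until the "default" ServiceAccount exists in
    // the default namespace, as in the default_sa.go lines above.
    func waitDefaultSA(ctx context.Context, c kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // not created yet; keep polling
                }
                return err == nil, err
            })
    }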
	I1004 02:49:12.381125 1155571 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:49:12.527713 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:12.636177 1155571 system_pods.go:86] 19 kube-system pods found
	I1004 02:49:12.636256 1155571 system_pods.go:89] "coredns-7c65d6cfc9-7r2nt" [cf115824-1f8f-4658-8edc-9374adaba879] Running
	I1004 02:49:12.636294 1155571 system_pods.go:89] "csi-hostpath-attacher-0" [8456fecd-406d-494a-9c0e-80040a8a1aa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 02:49:12.636336 1155571 system_pods.go:89] "csi-hostpath-resizer-0" [6f9236f9-2523-4020-ab74-0337b6f88e55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1004 02:49:12.636363 1155571 system_pods.go:89] "csi-hostpathplugin-j6ngh" [c1b6ab22-8a29-4b17-abbc-7026502ffa69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 02:49:12.636384 1155571 system_pods.go:89] "etcd-addons-813566" [de97b29d-5325-4ba2-ba1c-2e78973e29d0] Running
	I1004 02:49:12.636408 1155571 system_pods.go:89] "kindnet-9bqh8" [0f8a625d-6a9f-49c0-bb4c-ae9714f89b92] Running
	I1004 02:49:12.636431 1155571 system_pods.go:89] "kube-apiserver-addons-813566" [e41d6aa2-f064-4b70-8c69-57bbb53e8587] Running
	I1004 02:49:12.636453 1155571 system_pods.go:89] "kube-controller-manager-addons-813566" [2d3222d7-5e0e-4330-919b-639136147006] Running
	I1004 02:49:12.636474 1155571 system_pods.go:89] "kube-ingress-dns-minikube" [5534ca20-38ae-49ea-b27c-dafc26b2ce32] Running
	I1004 02:49:12.636504 1155571 system_pods.go:89] "kube-proxy-mtcgx" [952305f8-46e0-4fb6-af59-fc89c5da381a] Running
	I1004 02:49:12.636524 1155571 system_pods.go:89] "kube-scheduler-addons-813566" [858f3747-ce2a-4aaf-baf6-28e2483ea936] Running
	I1004 02:49:12.636544 1155571 system_pods.go:89] "logviewer-7c79c8bcc9-wks5q" [089aa2eb-91a9-4e35-af2b-4913fed9b821] Pending / Ready:ContainersNotReady (containers with unready status: [logviewer]) / ContainersReady:ContainersNotReady (containers with unready status: [logviewer])
	I1004 02:49:12.636565 1155571 system_pods.go:89] "metrics-server-84c5f94fbc-p2qpr" [f3315201-b874-46dd-b7f9-914da49dbb43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:49:12.636588 1155571 system_pods.go:89] "nvidia-device-plugin-daemonset-xqrwz" [8cc8b509-3784-424e-9f4b-54bf488b54d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1004 02:49:12.636619 1155571 system_pods.go:89] "registry-66c9cd494c-tx7r8" [08e6728e-1615-49cf-90c5-7adb914e944a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 02:49:12.636641 1155571 system_pods.go:89] "registry-proxy-gssnr" [fd499071-9287-4378-8b68-836755ad3000] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 02:49:12.636662 1155571 system_pods.go:89] "snapshot-controller-56fcc65765-d89ww" [959cc762-3c79-4718-a34a-82d6509a0b97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:12.636685 1155571 system_pods.go:89] "snapshot-controller-56fcc65765-sslhd" [2181497a-9c73-46f8-a67e-e1ec5f242df0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 02:49:12.636721 1155571 system_pods.go:89] "storage-provisioner" [2c4d678a-bf78-4c4e-b117-0b8c4d759029] Running
	I1004 02:49:12.636743 1155571 system_pods.go:126] duration metric: took 255.598447ms to wait for k8s-apps to be running ...
	I1004 02:49:12.636766 1155571 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:49:12.636837 1155571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:49:12.652129 1155571 system_svc.go:56] duration metric: took 15.34657ms WaitForService to wait for kubelet
	I1004 02:49:12.652154 1155571 kubeadm.go:582] duration metric: took 19.02160926s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
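The kubelet check above shells into the node through minikube's SSH runner and asks systemd whether the unit is active. A local stand-in using os/exec, a deliberately simplified sketch of the ssh_runner.go call (systemctl is-active --quiet exits 0 only when the unit is active):

    package example

    import (
        "fmt"
        "os/exec"
    )

    // kubeletActive mirrors the systemctl probe above, run locally rather
    // than over SSH; --quiet suppresses output and encodes the answer in
    // the exit status.
    func kubeletActive() (bool, error) {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err == nil {
            return true, nil
        }
        if _, ok := err.(*exec.ExitError); ok {
            return false, nil // unit known but not active
        }
        return false, fmt.Errorf("running systemctl: %w", err)
    }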
	I1004 02:49:12.652174 1155571 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:49:12.761169 1155571 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1004 02:49:12.761207 1155571 node_conditions.go:123] node cpu capacity is 2
	I1004 02:49:12.761222 1155571 node_conditions.go:105] duration metric: took 109.042713ms to run NodePressure ...
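The NodePressure step reads each node's reported capacity and pressure conditions straight from node status; the "cpu capacity is 2" and ephemeral-storage figures above come from that object. A sketch of the check, with illustrative output formatting:

    package example

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure prints each node's CPU and ephemeral-storage
    // capacity and fails if any pressure condition is True, in the spirit
    // of the node_conditions.go lines above.
    func checkNodePressure(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, cond := range n.Status.Conditions {
                switch cond.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if cond.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
                    }
                }
            }
        }
        return nil
    }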
	I1004 02:49:12.761235 1155571 start.go:241] waiting for startup goroutines ...
	I1004 02:49:12.786730 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:12.826493 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:13.025612 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:13.287277 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:13.327597 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:13.526125 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:13.788042 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:13.888211 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:14.025220 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:14.287066 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:14.326019 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:14.525103 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:14.786583 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:14.826472 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:15.033396 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:15.286891 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:15.326537 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:15.524662 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:15.787257 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:15.825773 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:16.024378 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:16.300234 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:16.325987 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:16.525212 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:16.786820 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:16.826403 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:17.028897 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:17.287370 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:17.328812 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:17.525085 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:17.785954 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:17.826028 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:18.026435 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:18.288659 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:18.388557 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:18.524664 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:18.789906 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:18.827324 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:19.035666 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:19.289651 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:19.363260 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:19.528541 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:19.790484 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:19.830067 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:20.033633 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:20.289597 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:20.326992 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:20.529248 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:20.791339 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:20.826197 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:21.025794 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:21.288409 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:21.326459 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:21.525425 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:21.787193 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:21.886100 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:22.025363 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:22.288607 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:22.326652 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:22.525076 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:22.787434 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:22.825939 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:23.024496 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:23.287847 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:23.326287 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:23.525061 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:23.796877 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:23.826365 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:24.025783 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:24.293411 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:24.390805 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:24.524783 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:24.785641 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:24.826503 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:25.024822 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:25.286233 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:25.325787 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:25.525254 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:25.787813 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:25.826792 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:26.024663 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:26.285855 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:26.325986 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:26.525021 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:26.788340 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:26.825865 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:27.035301 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:27.287025 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:27.326878 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 02:49:27.525356 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:27.790516 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:27.827634 1155571 kapi.go:107] duration metric: took 23.005366693s to wait for kubernetes.io/minikube-addons=registry ...
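The interleaved kapi.go:96 lines are independent polls, one per addon label selector, each re-listing its pods roughly twice a second until every match is Running; a kapi.go:107 line marks one selector's loop finishing, as the registry wait just did. A minimal sketch of one such loop (the interval and timeout here are assumptions, not kapi.go's actual constants):

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsBySelector polls until at least one pod matches the label
    // selector and all matches are Running, echoing the kapi.go loop above.
    func waitPodsBySelector(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 18*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // transient API error; retry until timeout
                }
                if len(pods.Items) == 0 {
                    return false, nil // addon pods not scheduled yet
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

A call like waitPodsBySelector(ctx, c, "kube-system", "kubernetes.io/minikube-addons=registry") would have blocked for roughly the 23s the registry wait took above.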
	I1004 02:49:28.025171 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:28.287236 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:28.525760 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:28.787478 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:29.024482 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:29.287406 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:29.525561 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:29.787889 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:30.033570 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:30.285862 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:30.524610 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:30.787453 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:31.025261 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:31.287895 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:31.525556 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:31.788212 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:32.025216 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:32.288078 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:32.524703 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:32.788059 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:33.025549 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:33.287311 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:33.524389 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:33.792637 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:34.024940 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:34.287151 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:34.525151 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:34.787106 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:35.025261 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:35.286928 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:35.524737 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:35.787655 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:36.024861 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:36.286737 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:36.525056 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:36.788185 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:37.035501 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:37.287445 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:37.524795 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:37.788962 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:38.034039 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:38.290428 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:38.524317 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:38.787364 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:39.024629 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:39.288534 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:39.526449 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:39.795185 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:40.039163 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:40.287069 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:40.524912 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:40.788887 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:41.038541 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.287444 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:41.572570 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:41.788728 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:42.027290 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:42.290202 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:42.524555 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:42.788135 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.024673 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:43.295748 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:43.528105 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:43.787462 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.024639 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.286575 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:44.524879 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:44.787784 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.040019 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.289209 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:45.524728 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:45.787314 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.026113 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.287567 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:46.525872 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:46.788052 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.024881 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.287845 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:47.524725 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:47.786429 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.026126 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.287275 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:48.524860 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:48.785817 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.025162 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.287471 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:49.525107 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:49.787327 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.026415 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.287414 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:50.525611 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:50.787586 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.026461 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.287224 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:51.525149 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:51.786890 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.025140 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.289004 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:52.524863 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:52.787044 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.025192 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.288398 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:53.524342 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:53.787214 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.027802 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.292023 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:54.525489 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:54.786971 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.026546 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.286797 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:55.525200 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 02:49:55.786618 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.025072 1155571 kapi.go:107] duration metric: took 50.505412757s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1004 02:49:56.286120 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:56.786360 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.287570 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:57.786810 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.290128 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:58.787966 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.291079 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:49:59.787168 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.309175 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:00.788996 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.286262 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:01.790812 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.287284 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:02.787109 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.286224 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:03.787188 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.287081 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:04.787166 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.286263 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:05.785700 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.286963 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:06.787359 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.286352 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:07.787707 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.288599 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:08.789531 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.285585 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:09.787624 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.288028 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:10.787201 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.287980 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:11.803612 1155571 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 02:50:12.286963 1155571 kapi.go:107] duration metric: took 1m9.505312994s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1004 02:50:30.286288 1155571 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1004 02:50:30.286318 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:30.786184 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.286328 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:31.785976 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.286699 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:32.786720 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.286378 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:33.786186 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.286942 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:34.786737 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.286526 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:35.785915 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.286700 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:36.786782 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.286759 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:37.787091 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.286202 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:38.786515 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.286654 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:39.786532 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.286793 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:40.786535 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.286572 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:41.786450 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.287532 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:42.785504 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.286479 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:43.785518 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.286429 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:44.786193 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.287144 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:45.786469 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.286285 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:46.786122 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.286098 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:47.786149 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.286674 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:48.786361 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.286856 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:49.785819 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:50.286750 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:50.786949 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:51.285816 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:51.786774 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:52.286783 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:52.786330 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:53.285837 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:53.786303 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:54.286424 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:54.785433 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:55.285969 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:55.786340 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:56.285927 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:56.785855 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:57.286969 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:57.786178 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:58.286078 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:58.785519 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:59.286365 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:50:59.786505 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:00.294773 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:00.786599 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:01.286059 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:01.785987 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:02.286251 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:02.786654 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:03.286563 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:03.786497 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:04.286724 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:04.786170 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:05.285763 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:05.786561 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same "waiting for pod" line repeated every ~500ms, timestamps 02:51:06 through 02:51:37 ...]
	I1004 02:51:37.786585 1155571 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 02:51:38.286514 1155571 kapi.go:107] duration metric: took 2m31.504045267s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1004 02:51:38.288268 1155571 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-813566 cluster.
	I1004 02:51:38.290635 1155571 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1004 02:51:38.292487 1155571 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1004 02:51:38.294376 1155571 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, storage-provisioner-rancher, volcano, logviewer, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1004 02:51:38.296070 1155571 addons.go:510] duration metric: took 2m44.665209374s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns storage-provisioner-rancher volcano logviewer metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1004 02:51:38.296113 1155571 start.go:246] waiting for cluster config update ...
	I1004 02:51:38.296134 1155571 start.go:255] writing updated cluster config ...
	I1004 02:51:38.296454 1155571 ssh_runner.go:195] Run: rm -f paused
	I1004 02:51:38.644726 1155571 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1004 02:51:38.646604 1155571 out.go:177] * Done! kubectl is now configured to use "addons-813566" cluster and "default" namespace by default
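
The poll above is kapi.go checking the gcp-auth pod's label every ~500ms until it leaves Pending. A hedged stand-alone equivalent with plain kubectl; the namespace is taken from the node listing further down, and the 180s timeout is an arbitrary choice, not minikube's:

kubectl --context addons-813566 -n gcp-auth wait pod \
  --selector=kubernetes.io/minikube-addons=gcp-auth \
  --for=condition=Ready --timeout=180s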
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b7de99daf410b       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   6474f59f34ea2       gcp-auth-89d5ffd79-w6rmz
	45596e0e04949       1a9605c872c1d       4 minutes ago       Running             admission                                0                   eb7f57165a81d       volcano-admission-5874dfdd79-6h88w
	f3e3abf269abe       289a818c8d9c5       4 minutes ago       Running             controller                               0                   22e6018f41521       ingress-nginx-controller-bc57996ff-c7qk6
	6a367a2097102       420193b27261a       4 minutes ago       Exited              patch                                    2                   a99672548e929       ingress-nginx-admission-patch-bnz25
	401daefa6d293       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   487f537729a59       csi-hostpathplugin-j6ngh
	edff4a08e2c2a       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   487f537729a59       csi-hostpathplugin-j6ngh
	35f8b821a465d       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   487f537729a59       csi-hostpathplugin-j6ngh
	31cb3424fcf8d       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   487f537729a59       csi-hostpathplugin-j6ngh
	7eee3ea85caa3       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   487f537729a59       csi-hostpathplugin-j6ngh
	fe691c00d00aa       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   487f537729a59       csi-hostpathplugin-j6ngh
	e9ea7ef089d56       6aa88c604f2b4       5 minutes ago       Running             volcano-scheduler                        0                   d3fe00c3943ac       volcano-scheduler-6c9778cbdf-djjtz
	50f67a06782ff       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   ba1849b87515c       csi-hostpath-resizer-0
	fba26ac56671f       420193b27261a       5 minutes ago       Exited              create                                   0                   ac1f8500eb40c       ingress-nginx-admission-create-jzj7v
	484d00675938e       23cbb28ae641a       5 minutes ago       Running             volcano-controllers                      0                   c3957ff9a99ff       volcano-controllers-789ffc5785-4smn5
	41d43d5e04b33       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   77594cb55ff15       csi-hostpath-attacher-0
	83422087c8b20       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   a3c2c6bc5ba98       snapshot-controller-56fcc65765-sslhd
	91a8b5d7d2444       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   bf9becea53756       local-path-provisioner-86d989889c-q8hvj
	d15b33690d2d5       44ee3981ac37c       5 minutes ago       Running             logviewer                                0                   03f72749985d3       logviewer-7c79c8bcc9-wks5q
	67f096d065b8c       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   b061503c650ad       snapshot-controller-56fcc65765-d89ww
	3acd2475e2e6a       77bdba588b953       5 minutes ago       Running             yakd                                     0                   8054eef09f679       yakd-dashboard-67d98fc6b-smhk2
	91793343f063d       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   7731d63e28381       registry-proxy-gssnr
	3a44e5960e759       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   1933ad6e33d11       nvidia-device-plugin-daemonset-xqrwz
	e832ce9654fa6       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   db9ecc3ab0b20       metrics-server-84c5f94fbc-p2qpr
	d816975e402bc       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   46678e57b47f3       cloud-spanner-emulator-5b584cc74-7hznl
	7979ffcea5a52       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   c6971e1f69fa2       registry-66c9cd494c-tx7r8
	4706b07670295       4f725bf50aaa5       5 minutes ago       Running             gadget                                   0                   ab6f840876552       gadget-4vhfk
	91e49f2e07fa4       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   1d2ee069a54d8       coredns-7c65d6cfc9-7r2nt
	020fb30823a7f       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   e259249ff4c14       kube-ingress-dns-minikube
	44b74b80821b1       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   e021183e35bde       storage-provisioner
	e3fcb43961703       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   a8ffbbb4a00c9       kindnet-9bqh8
	47d3492e3150c       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   76b063bc63dfe       kube-proxy-mtcgx
	cc5c52c5a22b1       27e3830e14027       6 minutes ago       Running             etcd                                     0                   01c03935f14ef       etcd-addons-813566
	4bde940a36bb7       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   ae879c6a6ba71       kube-apiserver-addons-813566
	a9e6196f1811f       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   bc0f7af398d6b       kube-controller-manager-addons-813566
	0d49116fbb65b       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   4419a63125d2a       kube-scheduler-addons-813566
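
The table above is the CRI runtime's view of the node. A listing in the same shape can be pulled with crictl against the containerd socket recorded in the node annotations below; whether minikube's log collector uses exactly this invocation is an assumption:

sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
sudo crictl pods    # the sandboxes backing the POD ID column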
	
	
	==> containerd <==
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.963728242Z" level=info msg="TearDown network for sandbox \"956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e\" successfully"
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.963765124Z" level=info msg="StopPodSandbox for \"956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e\" returns successfully"
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.964348234Z" level=info msg="RemovePodSandbox for \"956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e\""
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.964388587Z" level=info msg="Forcibly stopping sandbox \"956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e\""
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.972034295Z" level=info msg="TearDown network for sandbox \"956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e\" successfully"
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.978791623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.978911589Z" level=info msg="RemovePodSandbox \"956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e\" returns successfully"
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.979485871Z" level=info msg="StopPodSandbox for \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\""
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.987500464Z" level=info msg="TearDown network for sandbox \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\" successfully"
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.987544336Z" level=info msg="StopPodSandbox for \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\" returns successfully"
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.988132370Z" level=info msg="RemovePodSandbox for \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\""
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.988173034Z" level=info msg="Forcibly stopping sandbox \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\""
	Oct 04 02:51:48 addons-813566 containerd[814]: time="2024-10-04T02:51:48.995272719Z" level=info msg="TearDown network for sandbox \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\" successfully"
	Oct 04 02:51:49 addons-813566 containerd[814]: time="2024-10-04T02:51:49.001660564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 04 02:51:49 addons-813566 containerd[814]: time="2024-10-04T02:51:49.001838705Z" level=info msg="RemovePodSandbox \"00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a\" returns successfully"
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.008545637Z" level=info msg="RemoveContainer for \"581cbd2b128f034f5f7b198f171541e6bdf488686fcd47a29824f3e138f93cd1\""
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.031911019Z" level=info msg="RemoveContainer for \"581cbd2b128f034f5f7b198f171541e6bdf488686fcd47a29824f3e138f93cd1\" returns successfully"
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.037284339Z" level=info msg="StopPodSandbox for \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\""
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.045412572Z" level=info msg="TearDown network for sandbox \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\" successfully"
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.045452728Z" level=info msg="StopPodSandbox for \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\" returns successfully"
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.045961262Z" level=info msg="RemovePodSandbox for \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\""
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.046008482Z" level=info msg="Forcibly stopping sandbox \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\""
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.053405241Z" level=info msg="TearDown network for sandbox \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\" successfully"
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.060260447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 04 02:52:49 addons-813566 containerd[814]: time="2024-10-04T02:52:49.060635854Z" level=info msg="RemovePodSandbox \"4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003\" returns successfully"
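
The StopPodSandbox/RemovePodSandbox sequences above are containerd garbage-collecting sandboxes of pods that have already been deleted (the completed cert jobs, judging by the timestamps). Two hedged ways to follow this from the node; inspectp only works while a sandbox still exists, and the ID below is copied from the log:

sudo crictl inspectp 4a9047d9215520831d7eb1ba3ad6be6e384504f82c77397376588084f51fd003
sudo journalctl -u containerd --since "2024-10-04 02:51:48"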
	
	
	==> coredns [91e49f2e07fa44f4948b96a2ada4103d3dbd0fa957ab8f385c4e0f7a882ec6ea] <==
	[INFO] 10.244.0.7:34269 - 60607 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000112229s
	[INFO] 10.244.0.7:34269 - 41575 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003217253s
	[INFO] 10.244.0.7:34269 - 8225 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00275502s
	[INFO] 10.244.0.7:34269 - 47101 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000146846s
	[INFO] 10.244.0.7:34269 - 3428 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000115946s
	[INFO] 10.244.0.7:51953 - 40312 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158793s
	[INFO] 10.244.0.7:51953 - 40566 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000062046s
	[INFO] 10.244.0.7:58948 - 7717 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068422s
	[INFO] 10.244.0.7:58948 - 8170 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067512s
	[INFO] 10.244.0.7:49331 - 42011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070072s
	[INFO] 10.244.0.7:49331 - 42274 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009801s
	[INFO] 10.244.0.7:40177 - 41763 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00196958s
	[INFO] 10.244.0.7:40177 - 42208 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001423598s
	[INFO] 10.244.0.7:33501 - 63530 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00006834s
	[INFO] 10.244.0.7:33501 - 63942 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005467s
	[INFO] 10.244.0.25:56307 - 17126 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185419s
	[INFO] 10.244.0.25:39412 - 29822 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000279662s
	[INFO] 10.244.0.25:37768 - 44738 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013339s
	[INFO] 10.244.0.25:50160 - 344 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000188216s
	[INFO] 10.244.0.25:42956 - 44303 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116463s
	[INFO] 10.244.0.25:40753 - 64228 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000263334s
	[INFO] 10.244.0.25:47742 - 53482 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002603981s
	[INFO] 10.244.0.25:36966 - 33681 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002176833s
	[INFO] 10.244.0.25:38281 - 23407 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002061486s
	[INFO] 10.244.0.25:41628 - 8838 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.005402529s
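
The NXDOMAIN ladder above is normal in-cluster resolution: with ndots:5, a name is tried against every entry of the pod's search path (own namespace, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal domain) before the final NOERROR answer. To see the search path from any running pod (the pod name is a placeholder):

kubectl --context addons-813566 exec <pod> -- cat /etc/resolv.conf
kubectl --context addons-813566 exec <pod> -- nslookup registry.kube-system.svc.cluster.local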
	
	
	==> describe nodes <==
	Name:               addons-813566
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-813566
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=addons-813566
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T02_48_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-813566
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-813566"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 02:48:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-813566
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 02:54:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 02:51:53 +0000   Fri, 04 Oct 2024 02:48:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 02:51:53 +0000   Fri, 04 Oct 2024 02:48:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 02:51:53 +0000   Fri, 04 Oct 2024 02:48:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 02:51:53 +0000   Fri, 04 Oct 2024 02:48:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-813566
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4463fb50b8d464cb311345dd3aecf21
	  System UUID:                814919db-c17c-416d-91b0-a8a94d0d80b6
	  Boot ID:                    c9bb91eb-f5c3-4f81-9b8d-aca1ad72b7b9
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-7hznl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-4vhfk                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-w6rmz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-c7qk6    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-7r2nt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-j6ngh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-813566                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-9bqh8                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-813566                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-813566       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-mtcgx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-813566                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 logviewer-7c79c8bcc9-wks5q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 metrics-server-84c5f94fbc-p2qpr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m58s
	  kube-system                 nvidia-device-plugin-daemonset-xqrwz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 registry-66c9cd494c-tx7r8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-gssnr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 snapshot-controller-56fcc65765-d89ww        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-sslhd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  local-path-storage          local-path-provisioner-86d989889c-q8hvj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-5874dfdd79-6h88w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-789ffc5785-4smn5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-6c9778cbdf-djjtz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-smhk2              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m16s)  kubelet          Node addons-813566 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m16s)  kubelet          Node addons-813566 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m16s)  kubelet          Node addons-813566 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-813566 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-813566 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-813566 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-813566 event: Registered Node addons-813566 in Controller
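
The describe output also quantifies the scheduling pressure on this 2-CPU node: requests already total 1050m (52%), leaving roughly 950m for anything new. Two hedged one-liners to pull just those numbers:

kubectl --context addons-813566 get node addons-813566 -o jsonpath='{.status.allocatable.cpu}'
kubectl --context addons-813566 describe node addons-813566 | grep -A9 'Allocated resources'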
	
	
	==> dmesg <==
	[Oct 4 00:55] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.058710] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.291304] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[Oct 4 02:19] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.194222] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.045203] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	
	
	==> etcd [cc5c52c5a22b139a9af15f8dae3cc7548f58c824549e9538d2a02461eb67cfbd] <==
	{"level":"info","ts":"2024-10-04T02:48:42.950204Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T02:48:42.950342Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T02:48:42.950480Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-04T02:48:42.951023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-10-04T02:48:42.956434Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-10-04T02:48:43.193997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-04T02:48:43.194047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-04T02:48:43.194098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-04T02:48:43.194124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-04T02:48:43.194179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-04T02:48:43.194200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-04T02:48:43.194209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-04T02:48:43.196819Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T02:48:43.197260Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-813566 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T02:48:43.197396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T02:48:43.197738Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T02:48:43.197898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T02:48:43.198005Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T02:48:43.198192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T02:48:43.198313Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T02:48:43.197418Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T02:48:43.204520Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T02:48:43.205845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T02:48:43.245077Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T02:48:43.246266Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [b7de99daf410b36d806258d5dfdc9f8c87076fbe87c332299845ddb15f67618c] <==
	2024/10/04 02:51:37 GCP Auth Webhook started!
	2024/10/04 02:51:55 Ready to marshal response ...
	2024/10/04 02:51:55 Ready to write response ...
	2024/10/04 02:51:55 Ready to marshal response ...
	2024/10/04 02:51:55 Ready to write response ...
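
Once started, this webhook mutates every new pod to mount the GCP credentials; the opt-out mentioned in the minikube output earlier is the gcp-auth-skip-secret label. A minimal sketch (the pod name is hypothetical, and since the log only requires the key to be present, the "true" value is an assumption):

kubectl --context addons-813566 apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-demo
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: nginx
EOF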
	
	
	==> kernel <==
	 02:54:57 up  6:37,  0 users,  load average: 1.03, 1.52, 2.17
	Linux addons-813566 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e3fcb439617032bc9bb2c4170840d3d0ef1ac05d4508eea55dabae3ddc478f62] <==
	I1004 02:52:56.001595       1 main.go:299] handling current node
	I1004 02:53:06.008634       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 02:53:06.008675       1 main.go:299] handling current node
	[... the same two-line "Handling node ... / handling current node" pair repeated every 10s, timestamps 02:53:16 through 02:54:36 ...]
	I1004 02:54:46.006051       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 02:54:46.006106       1 main.go:299] handling current node
	I1004 02:54:56.001108       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1004 02:54:56.001391       1 main.go:299] handling current node
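
The loop above is kindnet reconciling routes roughly every 10s; on a single-node cluster it only ever handles the node it runs on. To tail it live:

kubectl --context addons-813566 -n kube-system logs ds/kindnet --tail=20 -f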
	
	
	==> kube-apiserver [4bde940a36bb7c075a09dff9aa89b795ed0d660f64f8431de749287ea8219945] <==
	W1004 02:50:07.760193       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:08.822805       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:09.739460       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.66.212:443: connect: connection refused
	E1004 02:50:09.739499       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.66.212:443: connect: connection refused" logger="UnhandledError"
	W1004 02:50:09.741204       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:09.835260       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.66.212:443: connect: connection refused
	E1004 02:50:09.835296       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.66.212:443: connect: connection refused" logger="UnhandledError"
	W1004 02:50:09.836924       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:09.897884       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:10.989085       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:12.024148       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:13.051213       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:14.086322       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:15.184965       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:16.263122       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:17.284830       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:18.306223       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.198.149:443: connect: connection refused
	W1004 02:50:29.796929       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.66.212:443: connect: connection refused
	E1004 02:50:29.797016       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.66.212:443: connect: connection refused" logger="UnhandledError"
	W1004 02:51:09.750670       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.66.212:443: connect: connection refused
	E1004 02:51:09.750721       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.66.212:443: connect: connection refused" logger="UnhandledError"
	W1004 02:51:09.842963       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.66.212:443: connect: connection refused
	E1004 02:51:09.842999       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.66.212:443: connect: connection refused" logger="UnhandledError"
	I1004 02:51:55.196751       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1004 02:51:55.246559       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
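
The retry storm above shows the two failurePolicy modes side by side: volcano's queue webhook fails closed, so writes are rejected while its service is unreachable, while gcp-auth-mutate fails open and the request is admitted with only a logged error. To inspect the configured policies:

kubectl --context addons-813566 get mutatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'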
	
	
	==> kube-controller-manager [a9e6196f1811f8afb6d6194c3f059c57e3b7b0d8c0b669273b90b323d64b68a2] <==
	I1004 02:51:09.771335       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:09.773064       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:09.791849       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:09.850385       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:09.859880       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:09.866008       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:09.874486       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:10.928971       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:10.946646       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:12.061588       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:12.087052       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:13.068628       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:13.077726       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:13.084146       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1004 02:51:13.093612       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:13.101490       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:13.107594       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1004 02:51:38.031701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="9.623297ms"
	I1004 02:51:38.032191       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="67.175µs"
	I1004 02:51:43.034149       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1004 02:51:43.036648       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1004 02:51:43.086916       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1004 02:51:43.088133       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1004 02:51:53.260092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-813566"
	I1004 02:51:54.893718       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
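
The job-controller churn above is the gcp-auth-certs-create/patch Jobs being requeued on a 1s backoff until the webhook endpoints existed, then requeued with delay="0s" once they completed. The Jobs are short-lived, so querying them later may return nothing:

kubectl --context addons-813566 -n gcp-auth get jobs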
	
	
	==> kube-proxy [47d3492e3150c624514959369de161fc5843c355d59221fb3fb39947e2780418] <==
	I1004 02:48:55.308639       1 server_linux.go:66] "Using iptables proxy"
	I1004 02:48:55.394158       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1004 02:48:55.394249       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 02:48:55.443119       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1004 02:48:55.443211       1 server_linux.go:169] "Using iptables Proxier"
	I1004 02:48:55.445749       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 02:48:55.446274       1 server.go:483] "Version info" version="v1.31.1"
	I1004 02:48:55.446304       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 02:48:55.447811       1 config.go:199] "Starting service config controller"
	I1004 02:48:55.447840       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 02:48:55.447867       1 config.go:105] "Starting endpoint slice config controller"
	I1004 02:48:55.447872       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 02:48:55.448430       1 config.go:328] "Starting node config controller"
	I1004 02:48:55.448439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 02:48:55.548096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1004 02:48:55.548102       1 shared_informer.go:320] Caches are synced for service config
	I1004 02:48:55.548546       1 shared_informer.go:320] Caches are synced for node config
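
The startup warning above concerns NodePort binding: with nodePortAddresses unset, NodePort services accept connections on every local IP. The suggested fix lives in the KubeProxyConfiguration that kubeadm stores in a ConfigMap; a hedged sketch that narrows it to the minikube network (the ConfigMap name and key are kubeadm's defaults):

kubectl --context addons-813566 -n kube-system edit configmap kube-proxy
# then, inside the config.conf data, set for example:
#   nodePortAddresses: ["192.168.49.0/24"]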
	
	
	==> kube-scheduler [0d49116fbb65bbab519cec7c29ba5ad24af0073b6dc83c36ac9477955c97b6fe] <==
	W1004 02:48:46.664907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 02:48:46.671515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:46.664943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:48:46.671736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:46.664975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:48:46.671971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:46.665008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:48:46.672161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:46.665044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 02:48:46.672410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:46.665097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 02:48:46.672601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:46.665219       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 02:48:46.672793       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 02:48:46.665273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 02:48:46.672983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.514981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:48:47.515262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.528123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:48:47.528388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1004 02:48:47.717753       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 02:48:47.718018       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1004 02:48:47.757209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:48:47.757255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1004 02:48:50.255408       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
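	# A hypothetical spot-check, not part of the captured run: the list/watch
	# "forbidden" errors above are the usual startup race before the scheduler's
	# RBAC bindings propagate (the final cache-sync line shows they cleared).
	# kubectl impersonation can verify the grants after the fact:
	kubectl --context addons-813566 auth can-i list services --as=system:kube-scheduler
	kubectl --context addons-813566 auth can-i watch pods --as=system:kube-scheduler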
	
	
	==> kubelet <==
	Oct 04 02:51:10 addons-813566 kubelet[1490]: E1004 02:51:10.687435    1490 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod5a2eda25-ab86-42ce-9e0d-43019f787c0a/fe073aa5dbd1ff7d1935ffa32894c8b7bd329ee68c5ab4030b9de5aa644e0cef\": RecentStats: unable to find data in memory cache]"
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.144382    1490 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mj4r\" (UniqueName: \"kubernetes.io/projected/5a2eda25-ab86-42ce-9e0d-43019f787c0a-kube-api-access-9mj4r\") pod \"5a2eda25-ab86-42ce-9e0d-43019f787c0a\" (UID: \"5a2eda25-ab86-42ce-9e0d-43019f787c0a\") "
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.152517    1490 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2eda25-ab86-42ce-9e0d-43019f787c0a-kube-api-access-9mj4r" (OuterVolumeSpecName: "kube-api-access-9mj4r") pod "5a2eda25-ab86-42ce-9e0d-43019f787c0a" (UID: "5a2eda25-ab86-42ce-9e0d-43019f787c0a"). InnerVolumeSpecName "kube-api-access-9mj4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.246085    1490 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq5wh\" (UniqueName: \"kubernetes.io/projected/fd89b4fa-6219-4058-89f6-befe40102098-kube-api-access-rq5wh\") pod \"fd89b4fa-6219-4058-89f6-befe40102098\" (UID: \"fd89b4fa-6219-4058-89f6-befe40102098\") "
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.246249    1490 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9mj4r\" (UniqueName: \"kubernetes.io/projected/5a2eda25-ab86-42ce-9e0d-43019f787c0a-kube-api-access-9mj4r\") on node \"addons-813566\" DevicePath \"\""
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.248755    1490 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd89b4fa-6219-4058-89f6-befe40102098-kube-api-access-rq5wh" (OuterVolumeSpecName: "kube-api-access-rq5wh") pod "fd89b4fa-6219-4058-89f6-befe40102098" (UID: "fd89b4fa-6219-4058-89f6-befe40102098"). InnerVolumeSpecName "kube-api-access-rq5wh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.347439    1490 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rq5wh\" (UniqueName: \"kubernetes.io/projected/fd89b4fa-6219-4058-89f6-befe40102098-kube-api-access-rq5wh\") on node \"addons-813566\" DevicePath \"\""
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.923826    1490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956bd0182b819b2a811ffade11fbd3980eccaebd703df76d23b3d6f96cbbcc1e"
	Oct 04 02:51:12 addons-813566 kubelet[1490]: I1004 02:51:12.926127    1490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00df96263b2a249f1211acd47f82cb401313ec1ce6f9ccf5faaf9fc16ec29f8a"
	Oct 04 02:51:38 addons-813566 kubelet[1490]: I1004 02:51:38.019952    1490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-w6rmz" podStartSLOduration=65.891697292 podStartE2EDuration="1m9.019925772s" podCreationTimestamp="2024-10-04 02:50:29 +0000 UTC" firstStartedPulling="2024-10-04 02:51:34.143797933 +0000 UTC m=+165.359855100" lastFinishedPulling="2024-10-04 02:51:37.272026413 +0000 UTC m=+168.488083580" observedRunningTime="2024-10-04 02:51:38.017397532 +0000 UTC m=+169.233454698" watchObservedRunningTime="2024-10-04 02:51:38.019925772 +0000 UTC m=+169.235982938"
	Oct 04 02:51:44 addons-813566 kubelet[1490]: I1004 02:51:44.937100    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2eda25-ab86-42ce-9e0d-43019f787c0a" path="/var/lib/kubelet/pods/5a2eda25-ab86-42ce-9e0d-43019f787c0a/volumes"
	Oct 04 02:51:44 addons-813566 kubelet[1490]: I1004 02:51:44.937500    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd89b4fa-6219-4058-89f6-befe40102098" path="/var/lib/kubelet/pods/fd89b4fa-6219-4058-89f6-befe40102098/volumes"
	Oct 04 02:51:48 addons-813566 kubelet[1490]: I1004 02:51:48.926075    1490 scope.go:117] "RemoveContainer" containerID="b98a93b65d0583b847f64c02dc32044c9078cff925df15b5e8861a56ad9a4004"
	Oct 04 02:51:48 addons-813566 kubelet[1490]: I1004 02:51:48.933923    1490 scope.go:117] "RemoveContainer" containerID="fe073aa5dbd1ff7d1935ffa32894c8b7bd329ee68c5ab4030b9de5aa644e0cef"
	Oct 04 02:51:53 addons-813566 kubelet[1490]: I1004 02:51:53.934227    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-tx7r8" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:51:54 addons-813566 kubelet[1490]: I1004 02:51:54.938355    1490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c7bcde4-f74d-44b9-9622-57acbc67ff44" path="/var/lib/kubelet/pods/9c7bcde4-f74d-44b9-9622-57acbc67ff44/volumes"
	Oct 04 02:52:00 addons-813566 kubelet[1490]: I1004 02:52:00.934437    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gssnr" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:52:05 addons-813566 kubelet[1490]: I1004 02:52:05.934127    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xqrwz" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:52:49 addons-813566 kubelet[1490]: I1004 02:52:49.006837    1490 scope.go:117] "RemoveContainer" containerID="581cbd2b128f034f5f7b198f171541e6bdf488686fcd47a29824f3e138f93cd1"
	Oct 04 02:53:10 addons-813566 kubelet[1490]: I1004 02:53:10.934366    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-tx7r8" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:53:15 addons-813566 kubelet[1490]: I1004 02:53:15.936175    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gssnr" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:53:22 addons-813566 kubelet[1490]: I1004 02:53:22.934235    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xqrwz" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:54:25 addons-813566 kubelet[1490]: I1004 02:54:25.933785    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gssnr" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:54:33 addons-813566 kubelet[1490]: I1004 02:54:33.933454    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-tx7r8" secret="" err="secret \"gcp-auth\" not found"
	Oct 04 02:54:38 addons-813566 kubelet[1490]: I1004 02:54:38.936036    1490 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xqrwz" secret="" err="secret \"gcp-auth\" not found"
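	# A hypothetical follow-up, not part of the captured run: the recurring
	# pull-secret warnings above only mean no "gcp-auth" secret exists in
	# kube-system, so these pods pull images without registry credentials:
	kubectl --context addons-813566 -n kube-system get secret gcp-auth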
	
	
	==> storage-provisioner [44b74b80821b182d1a4de41d96dcfc3d4895baae764835261e4b874540653cdb] <==
	I1004 02:48:59.339635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 02:48:59.373761       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 02:48:59.373822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 02:48:59.396631       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 02:48:59.397510       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-813566_f732dcc2-f60a-4da8-8f62-5d90a175a21a!
	I1004 02:48:59.407668       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"513b931f-3bbf-4c38-b3bf-7382deaaf60e", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-813566_f732dcc2-f60a-4da8-8f62-5d90a175a21a became leader
	I1004 02:48:59.498254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-813566_f732dcc2-f60a-4da8-8f62-5d90a175a21a!
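	# A hypothetical inspection, not part of the captured run: the leader
	# election above is recorded as an annotation on the Endpoints object the
	# provisioner leases, and can be read back with:
	kubectl --context addons-813566 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml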
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-813566 -n addons-813566
helpers_test.go:261: (dbg) Run:  kubectl --context addons-813566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-jzj7v ingress-nginx-admission-patch-bnz25 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-813566 describe pod ingress-nginx-admission-create-jzj7v ingress-nginx-admission-patch-bnz25 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-813566 describe pod ingress-nginx-admission-create-jzj7v ingress-nginx-admission-patch-bnz25 test-job-nginx-0: exit status 1 (84.669028ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jzj7v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bnz25" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-813566 describe pod ingress-nginx-admission-create-jzj7v ingress-nginx-admission-patch-bnz25 test-job-nginx-0: exit status 1
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable volcano --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable volcano --alsologtostderr -v=1: (11.381256037s)
--- FAIL: TestAddons/serial/Volcano (211.27s)
TestStartStop/group/old-k8s-version/serial/SecondStart (372.72s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-445570 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1004 03:39:46.461201 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-445570 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m8.878031427s)
-- stdout --
	* [old-k8s-version-445570] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-445570" primary control-plane node in "old-k8s-version-445570" cluster
	* Pulling base image v0.0.45-1727731891-master ...
	* Restarting existing docker container for "old-k8s-version-445570" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-445570 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
	
	
-- /stdout --
** stderr ** 
	I1004 03:39:39.059875 1364533 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:39:39.060145 1364533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:39:39.060201 1364533 out.go:358] Setting ErrFile to fd 2...
	I1004 03:39:39.060221 1364533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:39:39.060600 1364533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:39:39.061097 1364533 out.go:352] Setting JSON to false
	I1004 03:39:39.062826 1364533 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26527,"bootTime":1727986652,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 03:39:39.062944 1364533 start.go:139] virtualization:  
	I1004 03:39:39.066781 1364533 out.go:177] * [old-k8s-version-445570] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:39:39.068865 1364533 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:39:39.068966 1364533 notify.go:220] Checking for updates...
	I1004 03:39:39.071907 1364533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:39:39.073999 1364533 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 03:39:39.075878 1364533 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 03:39:39.078366 1364533 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:39:39.080417 1364533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:39:39.082896 1364533 config.go:182] Loaded profile config "old-k8s-version-445570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1004 03:39:39.085291 1364533 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1004 03:39:39.087146 1364533 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:39:39.137410 1364533 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:39:39.137543 1364533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:39:39.226936 1364533 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-04 03:39:39.211232422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:39:39.227088 1364533 docker.go:318] overlay module found
	I1004 03:39:39.229593 1364533 out.go:177] * Using the docker driver based on existing profile
	I1004 03:39:39.231556 1364533 start.go:297] selected driver: docker
	I1004 03:39:39.231574 1364533 start.go:901] validating driver "docker" against &{Name:old-k8s-version-445570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-445570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:39:39.231687 1364533 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:39:39.232485 1364533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:39:39.314752 1364533 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-04 03:39:39.30114615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:39:39.315257 1364533 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:39:39.315286 1364533 cni.go:84] Creating CNI manager for ""
	I1004 03:39:39.315360 1364533 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 03:39:39.315441 1364533 start.go:340] cluster config:
	{Name:old-k8s-version-445570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-445570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
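	# A hypothetical sketch, not part of the captured run: the cluster config
	# dumped above is persisted as JSON in the profile directory (see the
	# "Saving config" lines) and is easier to read pretty-printed, assuming jq
	# is installed on the host:
	jq . /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/config.json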
	I1004 03:39:39.317531 1364533 out.go:177] * Starting "old-k8s-version-445570" primary control-plane node in "old-k8s-version-445570" cluster
	I1004 03:39:39.319168 1364533 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1004 03:39:39.320942 1364533 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:39:39.325154 1364533 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1004 03:39:39.325213 1364533 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1004 03:39:39.325224 1364533 cache.go:56] Caching tarball of preloaded images
	I1004 03:39:39.325303 1364533 preload.go:172] Found /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1004 03:39:39.325312 1364533 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1004 03:39:39.325431 1364533 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/config.json ...
	I1004 03:39:39.325661 1364533 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:39:39.356504 1364533 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:39:39.356523 1364533 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:39:39.356537 1364533 cache.go:194] Successfully downloaded all kic artifacts
	I1004 03:39:39.356564 1364533 start.go:360] acquireMachinesLock for old-k8s-version-445570: {Name:mkdd704622089ee663daae5187f69a0613c5aa4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:39:39.356615 1364533 start.go:364] duration metric: took 33.148µs to acquireMachinesLock for "old-k8s-version-445570"
	I1004 03:39:39.356635 1364533 start.go:96] Skipping create...Using existing machine configuration
	I1004 03:39:39.356640 1364533 fix.go:54] fixHost starting: 
	I1004 03:39:39.356898 1364533 cli_runner.go:164] Run: docker container inspect old-k8s-version-445570 --format={{.State.Status}}
	I1004 03:39:39.378912 1364533 fix.go:112] recreateIfNeeded on old-k8s-version-445570: state=Stopped err=<nil>
	W1004 03:39:39.378946 1364533 fix.go:138] unexpected machine state, will restart: <nil>
	I1004 03:39:39.381302 1364533 out.go:177] * Restarting existing docker container for "old-k8s-version-445570" ...
	I1004 03:39:39.383202 1364533 cli_runner.go:164] Run: docker start old-k8s-version-445570
	I1004 03:39:39.756138 1364533 cli_runner.go:164] Run: docker container inspect old-k8s-version-445570 --format={{.State.Status}}
	I1004 03:39:39.798468 1364533 kic.go:430] container "old-k8s-version-445570" state is running.
	I1004 03:39:39.798994 1364533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-445570
	I1004 03:39:39.824952 1364533 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/config.json ...
	I1004 03:39:39.825193 1364533 machine.go:93] provisionDockerMachine start ...
	I1004 03:39:39.825252 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:39.848502 1364533 main.go:141] libmachine: Using SSH client type: native
	I1004 03:39:39.848913 1364533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1004 03:39:39.848929 1364533 main.go:141] libmachine: About to run SSH command:
	hostname
	I1004 03:39:39.849725 1364533 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59636->127.0.0.1:34547: read: connection reset by peer
	I1004 03:39:42.992408 1364533 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-445570
	
	I1004 03:39:42.992480 1364533 ubuntu.go:169] provisioning hostname "old-k8s-version-445570"
	I1004 03:39:42.992593 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:43.022009 1364533 main.go:141] libmachine: Using SSH client type: native
	I1004 03:39:43.022321 1364533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1004 03:39:43.022334 1364533 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-445570 && echo "old-k8s-version-445570" | sudo tee /etc/hostname
	I1004 03:39:43.193388 1364533 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-445570
	
	I1004 03:39:43.193481 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:43.215372 1364533 main.go:141] libmachine: Using SSH client type: native
	I1004 03:39:43.215629 1364533 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1004 03:39:43.215655 1364533 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-445570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-445570/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-445570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 03:39:43.352718 1364533 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 03:39:43.352749 1364533 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19546-1149434/.minikube CaCertPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19546-1149434/.minikube}
	I1004 03:39:43.352794 1364533 ubuntu.go:177] setting up certificates
	I1004 03:39:43.352813 1364533 provision.go:84] configureAuth start
	I1004 03:39:43.352895 1364533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-445570
	I1004 03:39:43.372168 1364533 provision.go:143] copyHostCerts
	I1004 03:39:43.372236 1364533 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.pem, removing ...
	I1004 03:39:43.372258 1364533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.pem
	I1004 03:39:43.372352 1364533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.pem (1078 bytes)
	I1004 03:39:43.372464 1364533 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-1149434/.minikube/cert.pem, removing ...
	I1004 03:39:43.372475 1364533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-1149434/.minikube/cert.pem
	I1004 03:39:43.372506 1364533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19546-1149434/.minikube/cert.pem (1123 bytes)
	I1004 03:39:43.372578 1364533 exec_runner.go:144] found /home/jenkins/minikube-integration/19546-1149434/.minikube/key.pem, removing ...
	I1004 03:39:43.372590 1364533 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19546-1149434/.minikube/key.pem
	I1004 03:39:43.372616 1364533 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19546-1149434/.minikube/key.pem (1679 bytes)
	I1004 03:39:43.372676 1364533 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-445570 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-445570]
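	# A hypothetical check, not part of the captured run: confirm the SANs baked
	# into the server certificate generated above.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'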
	I1004 03:39:44.255796 1364533 provision.go:177] copyRemoteCerts
	I1004 03:39:44.255897 1364533 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 03:39:44.255950 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:44.274513 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:44.376749 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 03:39:44.405454 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 03:39:44.431203 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 03:39:44.457164 1364533 provision.go:87] duration metric: took 1.104336663s to configureAuth
	I1004 03:39:44.457189 1364533 ubuntu.go:193] setting minikube options for container-runtime
	I1004 03:39:44.457404 1364533 config.go:182] Loaded profile config "old-k8s-version-445570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1004 03:39:44.457412 1364533 machine.go:96] duration metric: took 4.632210919s to provisionDockerMachine
	I1004 03:39:44.457420 1364533 start.go:293] postStartSetup for "old-k8s-version-445570" (driver="docker")
	I1004 03:39:44.457467 1364533 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 03:39:44.457515 1364533 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 03:39:44.457561 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:44.476967 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:44.574005 1364533 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 03:39:44.577521 1364533 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1004 03:39:44.577559 1364533 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1004 03:39:44.577577 1364533 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1004 03:39:44.577586 1364533 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1004 03:39:44.577596 1364533 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-1149434/.minikube/addons for local assets ...
	I1004 03:39:44.577656 1364533 filesync.go:126] Scanning /home/jenkins/minikube-integration/19546-1149434/.minikube/files for local assets ...
	I1004 03:39:44.577746 1364533 filesync.go:149] local asset: /home/jenkins/minikube-integration/19546-1149434/.minikube/files/etc/ssl/certs/11548132.pem -> 11548132.pem in /etc/ssl/certs
	I1004 03:39:44.577858 1364533 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 03:39:44.586763 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/files/etc/ssl/certs/11548132.pem --> /etc/ssl/certs/11548132.pem (1708 bytes)
	I1004 03:39:44.613287 1364533 start.go:296] duration metric: took 155.851037ms for postStartSetup
	I1004 03:39:44.613394 1364533 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:39:44.613457 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:44.630936 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:44.725761 1364533 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1004 03:39:44.730917 1364533 fix.go:56] duration metric: took 5.374268178s for fixHost
	I1004 03:39:44.730943 1364533 start.go:83] releasing machines lock for "old-k8s-version-445570", held for 5.374320272s
	I1004 03:39:44.731013 1364533 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-445570
	I1004 03:39:44.758053 1364533 ssh_runner.go:195] Run: cat /version.json
	I1004 03:39:44.758135 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:44.758427 1364533 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 03:39:44.758483 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:44.794066 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:44.808970 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:44.892034 1364533 ssh_runner.go:195] Run: systemctl --version
	I1004 03:39:45.064754 1364533 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 03:39:45.071238 1364533 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1004 03:39:45.111215 1364533 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1004 03:39:45.111325 1364533 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 03:39:45.128659 1364533 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 03:39:45.128750 1364533 start.go:495] detecting cgroup driver to use...
	I1004 03:39:45.128821 1364533 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1004 03:39:45.128914 1364533 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1004 03:39:45.151235 1364533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1004 03:39:45.184040 1364533 docker.go:217] disabling cri-docker service (if available) ...
	I1004 03:39:45.184138 1364533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 03:39:45.201870 1364533 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 03:39:45.217441 1364533 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 03:39:45.386558 1364533 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 03:39:45.511002 1364533 docker.go:233] disabling docker service ...
	I1004 03:39:45.511098 1364533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 03:39:45.530559 1364533 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 03:39:45.545691 1364533 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 03:39:45.660120 1364533 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 03:39:45.770124 1364533 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 03:39:45.783352 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 03:39:45.801815 1364533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1004 03:39:45.811911 1364533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1004 03:39:45.821716 1364533 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1004 03:39:45.821799 1364533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1004 03:39:45.831560 1364533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1004 03:39:45.841311 1364533 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1004 03:39:45.850849 1364533 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1004 03:39:45.860341 1364533 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 03:39:45.869289 1364533 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1004 03:39:45.878908 1364533 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 03:39:45.887771 1364533 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 03:39:45.897063 1364533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:39:46.001490 1364533 ssh_runner.go:195] Run: sudo systemctl restart containerd
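	# A hypothetical verification, not part of the captured run: confirm the sed
	# edits above (cgroupfs driver, pinned pause image) survived the restart.
	minikube -p old-k8s-version-445570 ssh -- sudo grep -e SystemdCgroup -e sandbox_image /etc/containerd/config.toml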
	I1004 03:39:46.203882 1364533 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1004 03:39:46.203972 1364533 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1004 03:39:46.213619 1364533 start.go:563] Will wait 60s for crictl version
	I1004 03:39:46.213714 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:39:46.217374 1364533 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 03:39:46.276650 1364533 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1004 03:39:46.276728 1364533 ssh_runner.go:195] Run: containerd --version
	I1004 03:39:46.304299 1364533 ssh_runner.go:195] Run: containerd --version
	I1004 03:39:46.328314 1364533 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1004 03:39:46.330104 1364533 cli_runner.go:164] Run: docker network inspect old-k8s-version-445570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 03:39:46.344439 1364533 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1004 03:39:46.348311 1364533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 03:39:46.359659 1364533 kubeadm.go:883] updating cluster {Name:old-k8s-version-445570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-445570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1004 03:39:46.359790 1364533 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1004 03:39:46.359856 1364533 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:39:46.409386 1364533 containerd.go:627] all images are preloaded for containerd runtime.
	I1004 03:39:46.409410 1364533 containerd.go:534] Images already preloaded, skipping extraction
	I1004 03:39:46.409474 1364533 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 03:39:46.455757 1364533 containerd.go:627] all images are preloaded for containerd runtime.
	I1004 03:39:46.455780 1364533 cache_images.go:84] Images are preloaded, skipping loading
	I1004 03:39:46.455787 1364533 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1004 03:39:46.455894 1364533 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-445570 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-445570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
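	# A hypothetical inspection, not part of the captured run: the kubelet unit
	# rendered above lands as a systemd drop-in inside the node (see the scp
	# lines below) and can be read back with:
	minikube -p old-k8s-version-445570 ssh -- sudo systemctl cat kubelet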
	I1004 03:39:46.455961 1364533 ssh_runner.go:195] Run: sudo crictl info
	I1004 03:39:46.506603 1364533 cni.go:84] Creating CNI manager for ""
	I1004 03:39:46.506628 1364533 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 03:39:46.506654 1364533 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1004 03:39:46.506705 1364533 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-445570 NodeName:old-k8s-version-445570 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 03:39:46.506852 1364533 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-445570"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 03:39:46.506929 1364533 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1004 03:39:46.521788 1364533 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 03:39:46.521912 1364533 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 03:39:46.534729 1364533 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1004 03:39:46.554922 1364533 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 03:39:46.575343 1364533 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
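The kubeadm.yaml.new written above is the config block logged at kubeadm.go:187, produced by filling the "kubeadm options" struct into a Go text/template (a simplified, illustrative sketch below; field names and the template body are stand-ins, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // tmplData is a hypothetical, trimmed-down stand-in for the values
    // minikube feeds its kubeadm template; only the fields used below.
    type tmplData struct {
    	K8sVersion    string
    	PodSubnet     string
    	ServiceSubnet string
    }

    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	t := template.Must(template.New("cfg").Parse(clusterCfg))
    	// Values match the log above: pod CIDR 10.244.0.0/16, service CIDR 10.96.0.0/12.
    	_ = t.Execute(os.Stdout, tmplData{
    		K8sVersion:    "v1.20.0",
    		PodSubnet:     "10.244.0.0/16",
    		ServiceSubnet: "10.96.0.0/12",
    	})
    }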
	I1004 03:39:46.595798 1364533 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1004 03:39:46.599238 1364533 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
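The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: any existing line for that name is stripped, then the current IP is appended. The same logic as a pure Go function (illustrative sketch operating on a string, since the real code runs the shell pipeline over SSH):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry drops any line already mapping name, then appends
    // "ip\tname" -- mirroring the grep -v / echo pipeline in the log.
    func ensureHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(strings.TrimSpace(line), name) {
    			continue // stale entry, replaced below
    		}
    		kept = append(kept, line)
    	}
    	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
    	in := "127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal"
    	fmt.Print(ensureHostsEntry(in, "192.168.76.2", "control-plane.minikube.internal"))
    }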
	I1004 03:39:46.610170 1364533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:39:46.703369 1364533 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:39:46.724817 1364533 certs.go:68] Setting up /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570 for IP: 192.168.76.2
	I1004 03:39:46.724840 1364533 certs.go:194] generating shared ca certs ...
	I1004 03:39:46.724856 1364533 certs.go:226] acquiring lock for ca certs: {Name:mkbb55aef12d0dc8daa9e4b13628be072878b5e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:39:46.725037 1364533 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.key
	I1004 03:39:46.725096 1364533 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.key
	I1004 03:39:46.725108 1364533 certs.go:256] generating profile certs ...
	I1004 03:39:46.725199 1364533 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.key
	I1004 03:39:46.725284 1364533 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/apiserver.key.37c5016b
	I1004 03:39:46.725341 1364533 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/proxy-client.key
	I1004 03:39:46.725471 1364533 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/1154813.pem (1338 bytes)
	W1004 03:39:46.725515 1364533 certs.go:480] ignoring /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/1154813_empty.pem, impossibly tiny 0 bytes
	I1004 03:39:46.725528 1364533 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca-key.pem (1675 bytes)
	I1004 03:39:46.725557 1364533 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem (1078 bytes)
	I1004 03:39:46.725588 1364533 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem (1123 bytes)
	I1004 03:39:46.725612 1364533 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/key.pem (1679 bytes)
	I1004 03:39:46.725659 1364533 certs.go:484] found cert: /home/jenkins/minikube-integration/19546-1149434/.minikube/files/etc/ssl/certs/11548132.pem (1708 bytes)
	I1004 03:39:46.726354 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 03:39:46.751426 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 03:39:46.775153 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 03:39:46.800706 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 03:39:46.825837 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1004 03:39:46.849707 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 03:39:46.873203 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 03:39:46.902706 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 03:39:46.928995 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/1154813.pem --> /usr/share/ca-certificates/1154813.pem (1338 bytes)
	I1004 03:39:46.957252 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/files/etc/ssl/certs/11548132.pem --> /usr/share/ca-certificates/11548132.pem (1708 bytes)
	I1004 03:39:46.989120 1364533 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 03:39:47.018695 1364533 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 03:39:47.056586 1364533 ssh_runner.go:195] Run: openssl version
	I1004 03:39:47.065628 1364533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11548132.pem && ln -fs /usr/share/ca-certificates/11548132.pem /etc/ssl/certs/11548132.pem"
	I1004 03:39:47.076994 1364533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11548132.pem
	I1004 03:39:47.082753 1364533 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 02:59 /usr/share/ca-certificates/11548132.pem
	I1004 03:39:47.082870 1364533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11548132.pem
	I1004 03:39:47.090077 1364533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11548132.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 03:39:47.102977 1364533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 03:39:47.113652 1364533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:39:47.117748 1364533 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:39:47.117863 1364533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 03:39:47.125262 1364533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 03:39:47.140154 1364533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1154813.pem && ln -fs /usr/share/ca-certificates/1154813.pem /etc/ssl/certs/1154813.pem"
	I1004 03:39:47.153756 1364533 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1154813.pem
	I1004 03:39:47.158005 1364533 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 02:59 /usr/share/ca-certificates/1154813.pem
	I1004 03:39:47.158142 1364533 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1154813.pem
	I1004 03:39:47.166553 1364533 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1154813.pem /etc/ssl/certs/51391683.0"
	I1004 03:39:47.179092 1364533 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1004 03:39:47.183338 1364533 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 03:39:47.190565 1364533 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 03:39:47.200658 1364533 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 03:39:47.207827 1364533 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 03:39:47.215004 1364533 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 03:39:47.223436 1364533 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
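Each `openssl x509 -checkend 86400` run above asks a single question: does the certificate expire within the next 24 hours (86400 seconds)? An equivalent check in Go with crypto/x509 (sketch; assumes a PEM-encoded certificate at the given path):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the cert at path expires within d --
    // the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	ok, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }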
	I1004 03:39:47.232729 1364533 kubeadm.go:392] StartCluster: {Name:old-k8s-version-445570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-445570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:39:47.232885 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1004 03:39:47.232974 1364533 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 03:39:47.295168 1364533 cri.go:89] found id: "5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160"
	I1004 03:39:47.295230 1364533 cri.go:89] found id: "9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b"
	I1004 03:39:47.295250 1364533 cri.go:89] found id: "b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77"
	I1004 03:39:47.295270 1364533 cri.go:89] found id: "db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321"
	I1004 03:39:47.295290 1364533 cri.go:89] found id: "f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287"
	I1004 03:39:47.295319 1364533 cri.go:89] found id: "1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23"
	I1004 03:39:47.295343 1364533 cri.go:89] found id: "2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f"
	I1004 03:39:47.295362 1364533 cri.go:89] found id: "cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec"
	I1004 03:39:47.295381 1364533 cri.go:89] found id: ""
	I1004 03:39:47.295458 1364533 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1004 03:39:47.312390 1364533 cri.go:116] JSON = null
	W1004 03:39:47.312506 1364533 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
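The "JSON = null" line means `runc list -f json` printed the literal null (no containers in that runc root) while `crictl ps` saw 8, so the unpause step is skipped with the warning above. Decoding into a generic slice makes the null case explicit (standalone sketch, not minikube's cri package):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Two shapes runc's JSON output can take: literal null, or an array.
    	// Field names in the sample array are illustrative only.
    	for _, out := range [][]byte{[]byte("null"), []byte(`[{"id":"abc","status":"paused"}]`)} {
    		var states []map[string]interface{}
    		if err := json.Unmarshal(out, &states); err != nil {
    			panic(err)
    		}
    		fmt.Printf("runc reports %d container(s)\n", len(states)) // null decodes to a nil slice -> 0
    	}
    }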
	I1004 03:39:47.312599 1364533 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 03:39:47.325143 1364533 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1004 03:39:47.325204 1364533 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1004 03:39:47.325283 1364533 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 03:39:47.336035 1364533 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 03:39:47.336551 1364533 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-445570" does not appear in /home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 03:39:47.336714 1364533 kubeconfig.go:62] /home/jenkins/minikube-integration/19546-1149434/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-445570" cluster setting kubeconfig missing "old-k8s-version-445570" context setting]
	I1004 03:39:47.337017 1364533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/kubeconfig: {Name:mkbb0a06a5c0d16e5af194939942d8ac82543668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:39:47.338472 1364533 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 03:39:47.354807 1364533 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1004 03:39:47.354881 1364533 kubeadm.go:597] duration metric: took 29.656449ms to restartPrimaryControlPlane
	I1004 03:39:47.354905 1364533 kubeadm.go:394] duration metric: took 122.185841ms to StartCluster
	I1004 03:39:47.354948 1364533 settings.go:142] acquiring lock: {Name:mk1a349894ce66bafe43f883e774857dde6892e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:39:47.355027 1364533 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 03:39:47.356205 1364533 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/kubeconfig: {Name:mkbb0a06a5c0d16e5af194939942d8ac82543668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:39:47.356773 1364533 config.go:182] Loaded profile config "old-k8s-version-445570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1004 03:39:47.356842 1364533 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1004 03:39:47.356886 1364533 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1004 03:39:47.357245 1364533 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-445570"
	I1004 03:39:47.357262 1364533 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-445570"
	I1004 03:39:47.357261 1364533 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-445570"
	I1004 03:39:47.357273 1364533 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-445570"
	I1004 03:39:47.357281 1364533 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-445570"
	I1004 03:39:47.357284 1364533 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-445570"
	I1004 03:39:47.357292 1364533 addons.go:69] Setting dashboard=true in profile "old-k8s-version-445570"
	I1004 03:39:47.357303 1364533 addons.go:234] Setting addon dashboard=true in "old-k8s-version-445570"
	W1004 03:39:47.357308 1364533 addons.go:243] addon dashboard should already be in state true
	I1004 03:39:47.357332 1364533 host.go:66] Checking if "old-k8s-version-445570" exists ...
	I1004 03:39:47.357654 1364533 cli_runner.go:164] Run: docker container inspect old-k8s-version-445570 --format={{.State.Status}}
	I1004 03:39:47.357907 1364533 cli_runner.go:164] Run: docker container inspect old-k8s-version-445570 --format={{.State.Status}}
	W1004 03:39:47.357285 1364533 addons.go:243] addon metrics-server should already be in state true
	I1004 03:39:47.358429 1364533 host.go:66] Checking if "old-k8s-version-445570" exists ...
	W1004 03:39:47.357268 1364533 addons.go:243] addon storage-provisioner should already be in state true
	I1004 03:39:47.358574 1364533 host.go:66] Checking if "old-k8s-version-445570" exists ...
	I1004 03:39:47.358926 1364533 cli_runner.go:164] Run: docker container inspect old-k8s-version-445570 --format={{.State.Status}}
	I1004 03:39:47.359054 1364533 cli_runner.go:164] Run: docker container inspect old-k8s-version-445570 --format={{.State.Status}}
	I1004 03:39:47.359425 1364533 out.go:177] * Verifying Kubernetes components...
	I1004 03:39:47.361350 1364533 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 03:39:47.420430 1364533 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1004 03:39:47.421999 1364533 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1004 03:39:47.423778 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1004 03:39:47.423806 1364533 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1004 03:39:47.423881 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:47.428821 1364533 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-445570"
	W1004 03:39:47.428842 1364533 addons.go:243] addon default-storageclass should already be in state true
	I1004 03:39:47.428869 1364533 host.go:66] Checking if "old-k8s-version-445570" exists ...
	I1004 03:39:47.429439 1364533 cli_runner.go:164] Run: docker container inspect old-k8s-version-445570 --format={{.State.Status}}
	I1004 03:39:47.432787 1364533 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 03:39:47.432985 1364533 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 03:39:47.436596 1364533 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:39:47.436618 1364533 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 03:39:47.436683 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:47.439598 1364533 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 03:39:47.439633 1364533 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 03:39:47.439705 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:47.504923 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:47.508484 1364533 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 03:39:47.508508 1364533 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 03:39:47.508565 1364533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-445570
	I1004 03:39:47.512114 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:47.522005 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:47.552449 1364533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/old-k8s-version-445570/id_rsa Username:docker}
	I1004 03:39:47.616531 1364533 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1004 03:39:47.675789 1364533 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-445570" to be "Ready" ...
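The node_ready wait above polls the node's Ready condition for up to 6 minutes, tolerating the transient "connection refused" errors logged below while the apiserver comes back up. A client-go sketch of the same loop (2s polling interval is an assumption; minikube's actual implementation differs):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the node reports Ready or the timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			fmt.Println("error getting node:", err) // transient while apiserver restarts
    			return false, nil                       // keep polling rather than abort
    		}
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "old-k8s-version-445570", 6*time.Minute))
    }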
	I1004 03:39:47.714791 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1004 03:39:47.714817 1364533 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1004 03:39:47.755842 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:39:47.790539 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1004 03:39:47.790566 1364533 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1004 03:39:47.815479 1364533 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 03:39:47.815502 1364533 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 03:39:47.875084 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:39:47.902477 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1004 03:39:47.902504 1364533 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1004 03:39:47.912666 1364533 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 03:39:47.912697 1364533 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 03:39:48.040119 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1004 03:39:48.040145 1364533 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1004 03:39:48.054001 1364533 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 03:39:48.054029 1364533 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 03:39:48.142665 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1004 03:39:48.142692 1364533 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1004 03:39:48.168684 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1004 03:39:48.244880 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.244915 1364533 retry.go:31] will retry after 240.98911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
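Each apply in these logs shells out to the version-pinned kubectl with KUBECONFIG pointing at the in-VM kubeconfig, which is why a down apiserver surfaces as "connection to the server localhost:8443 was refused". A minimal wrapper sketch of that invocation (illustrative, not minikube's addons code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddon runs the pinned kubectl against the given manifests,
    // mirroring the command lines in the log above.
    func applyAddon(manifests ...string) error {
    	args := []string{"apply", "--force"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"))
    }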
	I1004 03:39:48.296644 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1004 03:39:48.296673 1364533 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1004 03:39:48.322313 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.322345 1364533 retry.go:31] will retry after 335.580997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.349106 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1004 03:39:48.349133 1364533 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1004 03:39:48.422869 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1004 03:39:48.422896 1364533 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1004 03:39:48.453642 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.453676 1364533 retry.go:31] will retry after 216.846079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.475205 1364533 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1004 03:39:48.475236 1364533 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1004 03:39:48.486547 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:39:48.514479 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1004 03:39:48.658512 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:39:48.671436 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1004 03:39:48.711569 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.711606 1364533 retry.go:31] will retry after 353.636114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1004 03:39:48.723808 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.723840 1364533 retry.go:31] will retry after 147.625795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.872239 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1004 03:39:48.931043 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.931079 1364533 retry.go:31] will retry after 310.235003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1004 03:39:48.931148 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:48.931164 1364533 retry.go:31] will retry after 265.798105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1004 03:39:49.043812 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.043849 1364533 retry.go:31] will retry after 413.199268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.066416 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1004 03:39:49.197021 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.197054 1364533 retry.go:31] will retry after 497.720435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.197148 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 03:39:49.242253 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1004 03:39:49.327890 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.327956 1364533 retry.go:31] will retry after 796.837816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1004 03:39:49.387039 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.387070 1364533 retry.go:31] will retry after 404.504294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.457394 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1004 03:39:49.572060 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.572099 1364533 retry.go:31] will retry after 322.869742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.676728 1364533 node_ready.go:53] error getting node "old-k8s-version-445570": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-445570": dial tcp 192.168.76.2:8443: connect: connection refused
	I1004 03:39:49.694956 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:39:49.792390 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1004 03:39:49.881310 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.881343 1364533 retry.go:31] will retry after 1.036342548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:49.895650 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1004 03:39:50.044822 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:50.044857 1364533 retry.go:31] will retry after 683.364709ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1004 03:39:50.080201 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:50.080237 1364533 retry.go:31] will retry after 1.014505504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:50.125484 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1004 03:39:50.273669 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:50.273703 1364533 retry.go:31] will retry after 881.924555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:50.728821 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1004 03:39:50.865207 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:50.865240 1364533 retry.go:31] will retry after 1.699083404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
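The interleaved "will retry after ..." lines come from a retry helper that re-runs each failed apply after a growing, jittered delay until the apiserver answers. A simplified standalone sketch of that pattern (not minikube's retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping an
    // exponentially growing delay plus random jitter between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 { // succeed on the third try, like an apiserver coming back
    			return errors.New("connection to the server localhost:8443 was refused")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }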
	I1004 03:39:50.918485 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1004 03:39:51.062024 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:51.062061 1364533 retry.go:31] will retry after 968.607826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:51.095360 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1004 03:39:51.155919 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1004 03:39:51.259334 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:51.259367 1364533 retry.go:31] will retry after 1.117878799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1004 03:39:51.342101 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:51.342134 1364533 retry.go:31] will retry after 695.22781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:52.031105 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:39:52.038509 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 03:39:52.177316 1364533 node_ready.go:53] error getting node "old-k8s-version-445570": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-445570": dial tcp 192.168.76.2:8443: connect: connection refused
	W1004 03:39:52.214861 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:52.214935 1364533 retry.go:31] will retry after 953.249215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1004 03:39:52.239785 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:52.239862 1364533 retry.go:31] will retry after 1.373813714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:52.378232 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1004 03:39:52.502696 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:52.502782 1364533 retry.go:31] will retry after 2.545340942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:52.564987 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1004 03:39:52.698460 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:52.698555 1364533 retry.go:31] will retry after 1.930421278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:53.168954 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1004 03:39:53.236664 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:53.236700 1364533 retry.go:31] will retry after 2.118719417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:53.613814 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1004 03:39:53.723382 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:53.723414 1364533 retry.go:31] will retry after 3.014146415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:54.629958 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:39:54.676716 1364533 node_ready.go:53] error getting node "old-k8s-version-445570": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-445570": dial tcp 192.168.76.2:8443: connect: connection refused
	W1004 03:39:54.727164 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:54.727199 1364533 retry.go:31] will retry after 3.843521746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:55.048781 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1004 03:39:55.163648 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:55.163678 1364533 retry.go:31] will retry after 2.74755464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:55.355546 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1004 03:39:55.472226 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:55.472256 1364533 retry.go:31] will retry after 3.667529923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1004 03:39:56.738294 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 03:39:57.911896 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1004 03:39:58.570864 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1004 03:39:59.140921 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 03:40:06.677948 1364533 node_ready.go:53] error getting node "old-k8s-version-445570": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-445570": net/http: TLS handshake timeout
	I1004 03:40:06.958679 1364533 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.220290247s)
	W1004 03:40:06.958717 1364533 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1004 03:40:06.958740 1364533 retry.go:31] will retry after 3.466758112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
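
The stanza above (and the retry.go:31 lines before it) records minikube's addon-apply retry loop: each "kubectl apply --force" fails while the restarted apiserver is still unreachable on localhost:8443, and the apply is rescheduled with a jittered, growing backoff (695ms, 1.1s, 2.5s, 3.4s, ...). Below is a minimal sketch of that pattern; the function names and the plain doubling backoff are illustrative assumptions, not minikube's actual retry.go implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs "kubectl apply --force" until it succeeds or the
    // attempt budget is spent, sleeping longer after each failure. Hypothetical
    // sketch: minikube's real retry logic uses jittered backoff, visible in the
    // log as the irregular 695ms / 1.1s / 2.5s / 3.4s waits.
    func applyWithRetry(manifests []string, attempts int) error {
    	backoff := time.Second
    	args := []string{"apply", "--force"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		// Typical failure while the apiserver restarts:
    		// "The connection to the server localhost:8443 was refused"
    		fmt.Printf("apply failed, will retry after %v: %s", backoff, out)
    		time.Sleep(backoff)
    		backoff *= 2
    	}
    	return fmt.Errorf("apply did not succeed after %d attempts", attempts)
    }

    func main() {
    	if err := applyWithRetry([]string{"/etc/kubernetes/addons/storageclass.yaml"}, 5); err != nil {
    		fmt.Println(err)
    	}
    }
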
	I1004 03:40:07.912728 1364533 node_ready.go:49] node "old-k8s-version-445570" has status "Ready":"True"
	I1004 03:40:07.912756 1364533 node_ready.go:38] duration metric: took 20.236935285s for node "old-k8s-version-445570" to be "Ready" ...
	I1004 03:40:07.912767 1364533 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 03:40:08.054381 1364533 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-d25t5" in "kube-system" namespace to be "Ready" ...
	I1004 03:40:08.125241 1364533 pod_ready.go:93] pod "coredns-74ff55c5b-d25t5" in "kube-system" namespace has status "Ready":"True"
	I1004 03:40:08.125315 1364533 pod_ready.go:82] duration metric: took 70.845964ms for pod "coredns-74ff55c5b-d25t5" in "kube-system" namespace to be "Ready" ...
	I1004 03:40:08.125342 1364533 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:40:08.185753 1364533 pod_ready.go:93] pod "etcd-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"True"
	I1004 03:40:08.185830 1364533 pod_ready.go:82] duration metric: took 60.466188ms for pod "etcd-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:40:08.185869 1364533 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:40:08.209642 1364533 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"True"
	I1004 03:40:08.209716 1364533 pod_ready.go:82] duration metric: took 23.812153ms for pod "kube-apiserver-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:40:08.209743 1364533 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:40:08.887759 1364533 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.975809709s)
	I1004 03:40:08.888054 1364533 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.317151203s)
	I1004 03:40:08.888160 1364533 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.747161231s)
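
The pod_ready.go lines above and below all follow one pattern: fetch a system pod, check its PodReady condition, and poll again (the log shows roughly 2.5s between checks) until the condition is True or the per-pod deadline expires. A minimal client-go sketch of that check, with hypothetical names and a fixed poll interval assumed for illustration:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod's Ready condition until it is True or the
    // timeout lapses, mirroring the `has status "Ready":"False"` and
    // `"Ready":"True"` lines in the log. Hypothetical helper, not minikube's code.
    func waitPodReady(ctx context.Context, c *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2500 * time.Millisecond) // approximate poll spacing seen in the log
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v: context deadline exceeded", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitPodReady(context.Background(), client, "kube-system", "coredns-74ff55c5b-d25t5", 6*time.Minute))
    }
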
	I1004 03:40:08.890185 1364533 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-445570 addons enable metrics-server
	
	I1004 03:40:10.217357 1364533 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"False"
	I1004 03:40:10.425747 1364533 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 03:40:10.963227 1364533 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-445570"
	I1004 03:40:10.965688 1364533 out.go:177] * Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
	I1004 03:40:10.967507 1364533 addons.go:510] duration metric: took 23.610609774s for enable addons: enabled=[storage-provisioner dashboard default-storageclass metrics-server]
	I1004 03:40:12.724937 1364533 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"False"
	[... 25 near-identical pod_ready.go:103 lines elided: "kube-controller-manager-old-k8s-version-445570" stayed "Ready":"False" from 03:40:15 through 03:41:09, polled roughly every 2.5s ...]
	I1004 03:41:11.717418 1364533 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"False"
	I1004 03:41:12.727272 1364533 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"True"
	I1004 03:41:12.727311 1364533 pod_ready.go:82] duration metric: took 1m4.517545449s for pod "kube-controller-manager-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:41:12.727324 1364533 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tm9p4" in "kube-system" namespace to be "Ready" ...
	I1004 03:41:12.739093 1364533 pod_ready.go:93] pod "kube-proxy-tm9p4" in "kube-system" namespace has status "Ready":"True"
	I1004 03:41:12.739127 1364533 pod_ready.go:82] duration metric: took 11.796391ms for pod "kube-proxy-tm9p4" in "kube-system" namespace to be "Ready" ...
	I1004 03:41:12.739139 1364533 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:41:14.745697 1364533 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"False"
	I1004 03:41:16.746879 1364533 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"False"
	I1004 03:41:19.246497 1364533 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"False"
	I1004 03:41:21.746943 1364533 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"False"
	I1004 03:41:23.745933 1364533 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-445570" in "kube-system" namespace has status "Ready":"True"
	I1004 03:41:23.746002 1364533 pod_ready.go:82] duration metric: took 11.00685361s for pod "kube-scheduler-old-k8s-version-445570" in "kube-system" namespace to be "Ready" ...
	I1004 03:41:23.746028 1364533 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-7vbmk" in "kube-system" namespace to be "Ready" ...
	I1004 03:41:25.760468 1364533 pod_ready.go:103] pod "metrics-server-9975d5f86-7vbmk" in "kube-system" namespace has status "Ready":"False"
	[... 102 near-identical pod_ready.go:103 lines elided: "metrics-server-9975d5f86-7vbmk" stayed "Ready":"False" from 03:41:28 through 03:45:20, polled roughly every 2.5s for the full 4m wait ...]
	I1004 03:45:22.751457 1364533 pod_ready.go:103] pod "metrics-server-9975d5f86-7vbmk" in "kube-system" namespace has status "Ready":"False"
	I1004 03:45:23.753004 1364533 pod_ready.go:82] duration metric: took 4m0.006948975s for pod "metrics-server-9975d5f86-7vbmk" in "kube-system" namespace to be "Ready" ...
	E1004 03:45:23.753027 1364533 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 03:45:23.753036 1364533 pod_ready.go:39] duration metric: took 5m15.840258718s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
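
The 4m WaitExtra budget for metrics-server-9975d5f86-7vbmk expires without the pod ever reporting Ready; the kubelet excerpts further down show why: the image fake.domain/registry.k8s.io/echoserver:1.4 is unresolvable in this test, so the pod cycles through ErrImagePull and ImagePullBackOff. A plain readiness poll only ever sees "Ready":"False"; surfacing the underlying cause means reading the container statuses, e.g. with this hypothetical helper (same corev1 types as the previous sketch, demo values taken from the kubelet log):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // waitingReasons collects the kubelet's waiting reason per container
    // (ErrImagePull, ImagePullBackOff, CrashLoopBackOff, ...), which is the
    // detail a bare Ready poll cannot show. Illustrative sketch only.
    func waitingReasons(pod *corev1.Pod) []string {
    	var reasons []string
    	for _, cs := range pod.Status.ContainerStatuses {
    		if cs.State.Waiting != nil {
    			reasons = append(reasons, fmt.Sprintf("%s: %s (%s)", cs.Name, cs.State.Waiting.Reason, cs.State.Waiting.Message))
    		}
    	}
    	return reasons
    }

    func main() {
    	pod := &corev1.Pod{}
    	pod.Status.ContainerStatuses = []corev1.ContainerStatus{{
    		Name: "metrics-server",
    		State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{
    			Reason:  "ImagePullBackOff",
    			Message: `Back-off pulling image "fake.domain/registry.k8s.io/echoserver:1.4"`,
    		}},
    	}}
    	fmt.Println(waitingReasons(pod)) // [metrics-server: ImagePullBackOff (Back-off pulling image ...)]
    }
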
	I1004 03:45:23.753050 1364533 api_server.go:52] waiting for apiserver process to appear ...
	I1004 03:45:23.753080 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1004 03:45:23.753156 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 03:45:23.809384 1364533 cri.go:89] found id: "7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454"
	I1004 03:45:23.809406 1364533 cri.go:89] found id: "2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f"
	I1004 03:45:23.809411 1364533 cri.go:89] found id: ""
	I1004 03:45:23.809419 1364533 logs.go:282] 2 containers: [7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454 2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f]
	I1004 03:45:23.809479 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:23.813398 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:23.817135 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1004 03:45:23.817234 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 03:45:23.873280 1364533 cri.go:89] found id: "5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08"
	I1004 03:45:23.873300 1364533 cri.go:89] found id: "cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec"
	I1004 03:45:23.873311 1364533 cri.go:89] found id: ""
	I1004 03:45:23.873318 1364533 logs.go:282] 2 containers: [5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08 cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec]
	I1004 03:45:23.873380 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:23.881083 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:23.887553 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1004 03:45:23.887631 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 03:45:23.934505 1364533 cri.go:89] found id: "c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d"
	I1004 03:45:23.934529 1364533 cri.go:89] found id: "5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160"
	I1004 03:45:23.934535 1364533 cri.go:89] found id: ""
	I1004 03:45:23.934543 1364533 logs.go:282] 2 containers: [c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d 5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160]
	I1004 03:45:23.934608 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:23.939226 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:23.949260 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1004 03:45:23.949341 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 03:45:24.037809 1364533 cri.go:89] found id: "fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91"
	I1004 03:45:24.037839 1364533 cri.go:89] found id: "1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23"
	I1004 03:45:24.037845 1364533 cri.go:89] found id: ""
	I1004 03:45:24.037854 1364533 logs.go:282] 2 containers: [fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91 1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23]
	I1004 03:45:24.037918 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.042625 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.046725 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1004 03:45:24.046804 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 03:45:24.091153 1364533 cri.go:89] found id: "7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36"
	I1004 03:45:24.091176 1364533 cri.go:89] found id: "db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321"
	I1004 03:45:24.091182 1364533 cri.go:89] found id: ""
	I1004 03:45:24.091190 1364533 logs.go:282] 2 containers: [7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36 db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321]
	I1004 03:45:24.091252 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.096386 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.100179 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 03:45:24.100627 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 03:45:24.141515 1364533 cri.go:89] found id: "82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f"
	I1004 03:45:24.141537 1364533 cri.go:89] found id: "f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287"
	I1004 03:45:24.141542 1364533 cri.go:89] found id: ""
	I1004 03:45:24.141549 1364533 logs.go:282] 2 containers: [82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287]
	I1004 03:45:24.141607 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.145484 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.149953 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1004 03:45:24.150031 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 03:45:24.198270 1364533 cri.go:89] found id: "f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515"
	I1004 03:45:24.198295 1364533 cri.go:89] found id: "9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b"
	I1004 03:45:24.198301 1364533 cri.go:89] found id: ""
	I1004 03:45:24.198308 1364533 logs.go:282] 2 containers: [f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515 9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b]
	I1004 03:45:24.198369 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.202066 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.205439 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1004 03:45:24.205531 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 03:45:24.243497 1364533 cri.go:89] found id: "04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf"
	I1004 03:45:24.243516 1364533 cri.go:89] found id: "b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77"
	I1004 03:45:24.243520 1364533 cri.go:89] found id: ""
	I1004 03:45:24.243528 1364533 logs.go:282] 2 containers: [04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77]
	I1004 03:45:24.243583 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.247179 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:24.250742 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 03:45:24.250847 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 03:45:24.288907 1364533 cri.go:89] found id: "e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920"
	I1004 03:45:24.288928 1364533 cri.go:89] found id: ""
	I1004 03:45:24.288936 1364533 logs.go:282] 1 containers: [e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920]
	I1004 03:45:24.289005 1364533 ssh_runner.go:195] Run: which crictl
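
Each cri.go:54 block above resolves the container IDs for one control-plane component by name filter; finding two IDs per component means both the current and the pre-restart container are available for log gathering. A sketch of that discovery step, wrapping the exact crictl invocation shown in the log (the wrapper itself is hypothetical, not minikube's cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainers shells out like the log's
    // "sudo crictl ps -a --quiet --name=<component>" and splits the
    // one-ID-per-line output into a slice.
    func findContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := findContainers("kube-apiserver")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(ids) // e.g. the two IDs reported at cri.go:89 above
    }
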
	I1004 03:45:24.292588 1364533 logs.go:123] Gathering logs for kubelet ...
	I1004 03:45:24.292610 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1004 03:45:24.349611 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720678     671 reflector.go:138] object-"kube-system"/"kindnet-token-896vn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-896vn" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.349842 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720764     671 reflector.go:138] object-"kube-system"/"coredns-token-b58cn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b58cn" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.350077 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720820     671 reflector.go:138] object-"kube-system"/"storage-provisioner-token-qfjnb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-qfjnb" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.350294 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720869     671 reflector.go:138] object-"kube-system"/"kube-proxy-token-9x4ch": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-9x4ch" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.350514 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720917     671 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.350732 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720983     671 reflector.go:138] object-"default"/"default-token-mqxtr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-mqxtr" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.350957 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.721024     671 reflector.go:138] object-"kube-system"/"metrics-server-token-75zps": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-75zps" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.351164 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.721068     671 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:24.365714 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:11 old-k8s-version-445570 kubelet[671]: E1004 03:40:11.809865     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:24.365957 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:12 old-k8s-version-445570 kubelet[671]: E1004 03:40:12.524062     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.368777 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:25 old-k8s-version-445570 kubelet[671]: E1004 03:40:25.351183     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:24.371037 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:37 old-k8s-version-445570 kubelet[671]: E1004 03:40:37.341825     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.371505 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:37 old-k8s-version-445570 kubelet[671]: E1004 03:40:37.610610     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.371835 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:38 old-k8s-version-445570 kubelet[671]: E1004 03:40:38.614513     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.372162 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:39 old-k8s-version-445570 kubelet[671]: E1004 03:40:39.617155     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.375269 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:52 old-k8s-version-445570 kubelet[671]: E1004 03:40:52.345391     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:24.377047 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:53 old-k8s-version-445570 kubelet[671]: E1004 03:40:53.660236     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.377827 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:58 old-k8s-version-445570 kubelet[671]: E1004 03:40:58.315454     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.378262 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:07 old-k8s-version-445570 kubelet[671]: E1004 03:41:07.338008     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.378637 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:10 old-k8s-version-445570 kubelet[671]: E1004 03:41:10.337576     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.378827 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:19 old-k8s-version-445570 kubelet[671]: E1004 03:41:19.339131     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.379609 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:26 old-k8s-version-445570 kubelet[671]: E1004 03:41:26.778876     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.379944 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:28 old-k8s-version-445570 kubelet[671]: E1004 03:41:28.315566     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.385016 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:34 old-k8s-version-445570 kubelet[671]: E1004 03:41:34.346479     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:24.385377 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:41 old-k8s-version-445570 kubelet[671]: E1004 03:41:41.341220     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.385567 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:47 old-k8s-version-445570 kubelet[671]: E1004 03:41:47.343841     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.385897 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:55 old-k8s-version-445570 kubelet[671]: E1004 03:41:55.338126     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.386256 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:00 old-k8s-version-445570 kubelet[671]: E1004 03:42:00.341688     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.387160 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:07 old-k8s-version-445570 kubelet[671]: E1004 03:42:07.887419     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.387669 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:08 old-k8s-version-445570 kubelet[671]: E1004 03:42:08.891782     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.388543 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:14 old-k8s-version-445570 kubelet[671]: E1004 03:42:14.339176     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.388930 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:21 old-k8s-version-445570 kubelet[671]: E1004 03:42:21.337426     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.389382 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:28 old-k8s-version-445570 kubelet[671]: E1004 03:42:28.337831     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.389758 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:32 old-k8s-version-445570 kubelet[671]: E1004 03:42:32.337461     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.390044 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:41 old-k8s-version-445570 kubelet[671]: E1004 03:42:41.337927     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.390382 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:46 old-k8s-version-445570 kubelet[671]: E1004 03:42:46.337394     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.390566 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:54 old-k8s-version-445570 kubelet[671]: E1004 03:42:54.337799     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.391113 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:01 old-k8s-version-445570 kubelet[671]: E1004 03:43:01.341025     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.393984 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:08 old-k8s-version-445570 kubelet[671]: E1004 03:43:08.345159     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:24.394327 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:14 old-k8s-version-445570 kubelet[671]: E1004 03:43:14.337315     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.394516 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:22 old-k8s-version-445570 kubelet[671]: E1004 03:43:22.337815     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.395101 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:29 old-k8s-version-445570 kubelet[671]: E1004 03:43:29.135206     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.395284 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:33 old-k8s-version-445570 kubelet[671]: E1004 03:43:33.340492     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.395613 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:38 old-k8s-version-445570 kubelet[671]: E1004 03:43:38.315603     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.395799 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:48 old-k8s-version-445570 kubelet[671]: E1004 03:43:48.337782     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.396135 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:52 old-k8s-version-445570 kubelet[671]: E1004 03:43:52.337453     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.396336 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:00 old-k8s-version-445570 kubelet[671]: E1004 03:44:00.340379     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.396675 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:04 old-k8s-version-445570 kubelet[671]: E1004 03:44:04.337451     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.396992 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:15 old-k8s-version-445570 kubelet[671]: E1004 03:44:15.338470     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.397196 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:15 old-k8s-version-445570 kubelet[671]: E1004 03:44:15.338490     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.397420 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:26 old-k8s-version-445570 kubelet[671]: E1004 03:44:26.338825     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.397757 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:30 old-k8s-version-445570 kubelet[671]: E1004 03:44:30.337556     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.397947 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:37 old-k8s-version-445570 kubelet[671]: E1004 03:44:37.337773     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.398274 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:42 old-k8s-version-445570 kubelet[671]: E1004 03:44:42.337758     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.398460 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:48 old-k8s-version-445570 kubelet[671]: E1004 03:44:48.337763     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.398789 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:55 old-k8s-version-445570 kubelet[671]: E1004 03:44:55.342140     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.398974 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:00 old-k8s-version-445570 kubelet[671]: E1004 03:45:00.338647     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.399300 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:10 old-k8s-version-445570 kubelet[671]: E1004 03:45:10.337424     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:24.399484 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:13 old-k8s-version-445570 kubelet[671]: E1004 03:45:13.337752     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:24.399858 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:21 old-k8s-version-445570 kubelet[671]: E1004 03:45:21.338227     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	I1004 03:45:24.399876 1364533 logs.go:123] Gathering logs for kube-apiserver [2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f] ...
	I1004 03:45:24.399891 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f"
	I1004 03:45:24.460252 1364533 logs.go:123] Gathering logs for kube-proxy [db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321] ...
	I1004 03:45:24.460336 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321"
	I1004 03:45:24.511060 1364533 logs.go:123] Gathering logs for kube-controller-manager [f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287] ...
	I1004 03:45:24.511091 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287"
	I1004 03:45:24.599116 1364533 logs.go:123] Gathering logs for kubernetes-dashboard [e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920] ...
	I1004 03:45:24.599156 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920"
	I1004 03:45:24.645633 1364533 logs.go:123] Gathering logs for kube-apiserver [7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454] ...
	I1004 03:45:24.645667 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454"
	I1004 03:45:24.716785 1364533 logs.go:123] Gathering logs for etcd [5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08] ...
	I1004 03:45:24.716839 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08"
	I1004 03:45:24.769332 1364533 logs.go:123] Gathering logs for kube-scheduler [1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23] ...
	I1004 03:45:24.769364 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23"
	I1004 03:45:24.812217 1364533 logs.go:123] Gathering logs for kube-proxy [7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36] ...
	I1004 03:45:24.812247 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36"
	I1004 03:45:24.853299 1364533 logs.go:123] Gathering logs for kube-controller-manager [82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f] ...
	I1004 03:45:24.853327 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f"
	I1004 03:45:24.918755 1364533 logs.go:123] Gathering logs for kindnet [9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b] ...
	I1004 03:45:24.918793 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b"
	I1004 03:45:24.964680 1364533 logs.go:123] Gathering logs for storage-provisioner [04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf] ...
	I1004 03:45:24.964712 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf"
	I1004 03:45:25.009297 1364533 logs.go:123] Gathering logs for dmesg ...
	I1004 03:45:25.009330 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 03:45:25.032144 1364533 logs.go:123] Gathering logs for describe nodes ...
	I1004 03:45:25.032191 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 03:45:25.194250 1364533 logs.go:123] Gathering logs for etcd [cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec] ...
	I1004 03:45:25.194284 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec"
	I1004 03:45:25.234759 1364533 logs.go:123] Gathering logs for containerd ...
	I1004 03:45:25.234794 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1004 03:45:25.298098 1364533 logs.go:123] Gathering logs for container status ...
	I1004 03:45:25.298144 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 03:45:25.351916 1364533 logs.go:123] Gathering logs for coredns [c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d] ...
	I1004 03:45:25.351946 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d"
	I1004 03:45:25.391402 1364533 logs.go:123] Gathering logs for coredns [5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160] ...
	I1004 03:45:25.391433 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160"
	I1004 03:45:25.432888 1364533 logs.go:123] Gathering logs for kube-scheduler [fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91] ...
	I1004 03:45:25.432918 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91"
	I1004 03:45:25.473025 1364533 logs.go:123] Gathering logs for kindnet [f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515] ...
	I1004 03:45:25.473054 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515"
	I1004 03:45:25.521397 1364533 logs.go:123] Gathering logs for storage-provisioner [b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77] ...
	I1004 03:45:25.521430 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77"
	I1004 03:45:25.560531 1364533 out.go:358] Setting ErrFile to fd 2...
	I1004 03:45:25.560553 1364533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 03:45:25.560635 1364533 out.go:270] X Problems detected in kubelet:
	W1004 03:45:25.560647 1364533 out.go:270]   Oct 04 03:44:55 old-k8s-version-445570 kubelet[671]: E1004 03:44:55.342140     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:25.560676 1364533 out.go:270]   Oct 04 03:45:00 old-k8s-version-445570 kubelet[671]: E1004 03:45:00.338647     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:25.560702 1364533 out.go:270]   Oct 04 03:45:10 old-k8s-version-445570 kubelet[671]: E1004 03:45:10.337424     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:25.560730 1364533 out.go:270]   Oct 04 03:45:13 old-k8s-version-445570 kubelet[671]: E1004 03:45:13.337752     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:25.560738 1364533 out.go:270]   Oct 04 03:45:21 old-k8s-version-445570 kubelet[671]: E1004 03:45:21.338227     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	I1004 03:45:25.560744 1364533 out.go:358] Setting ErrFile to fd 2...
	I1004 03:45:25.560765 1364533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:45:35.561915 1364533 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:45:35.574040 1364533 api_server.go:72] duration metric: took 5m48.217077824s to wait for apiserver process to appear ...
	I1004 03:45:35.574065 1364533 api_server.go:88] waiting for apiserver healthz status ...
	I1004 03:45:35.574101 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1004 03:45:35.574170 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 03:45:35.617914 1364533 cri.go:89] found id: "7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454"
	I1004 03:45:35.617940 1364533 cri.go:89] found id: "2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f"
	I1004 03:45:35.617945 1364533 cri.go:89] found id: ""
	I1004 03:45:35.617952 1364533 logs.go:282] 2 containers: [7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454 2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f]
	I1004 03:45:35.618010 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.621667 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.625012 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1004 03:45:35.625083 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 03:45:35.662151 1364533 cri.go:89] found id: "5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08"
	I1004 03:45:35.662171 1364533 cri.go:89] found id: "cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec"
	I1004 03:45:35.662176 1364533 cri.go:89] found id: ""
	I1004 03:45:35.662183 1364533 logs.go:282] 2 containers: [5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08 cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec]
	I1004 03:45:35.662239 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.665793 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.669104 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1004 03:45:35.669193 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 03:45:35.706661 1364533 cri.go:89] found id: "c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d"
	I1004 03:45:35.706682 1364533 cri.go:89] found id: "5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160"
	I1004 03:45:35.706686 1364533 cri.go:89] found id: ""
	I1004 03:45:35.706693 1364533 logs.go:282] 2 containers: [c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d 5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160]
	I1004 03:45:35.706749 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.716172 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.720984 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1004 03:45:35.721057 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 03:45:35.759908 1364533 cri.go:89] found id: "fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91"
	I1004 03:45:35.759980 1364533 cri.go:89] found id: "1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23"
	I1004 03:45:35.759999 1364533 cri.go:89] found id: ""
	I1004 03:45:35.760022 1364533 logs.go:282] 2 containers: [fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91 1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23]
	I1004 03:45:35.760102 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.763646 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.766969 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1004 03:45:35.767068 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 03:45:35.813890 1364533 cri.go:89] found id: "7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36"
	I1004 03:45:35.813913 1364533 cri.go:89] found id: "db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321"
	I1004 03:45:35.813918 1364533 cri.go:89] found id: ""
	I1004 03:45:35.813924 1364533 logs.go:282] 2 containers: [7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36 db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321]
	I1004 03:45:35.814006 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.817498 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.820988 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 03:45:35.821056 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 03:45:35.861338 1364533 cri.go:89] found id: "82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f"
	I1004 03:45:35.861362 1364533 cri.go:89] found id: "f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287"
	I1004 03:45:35.861367 1364533 cri.go:89] found id: ""
	I1004 03:45:35.861374 1364533 logs.go:282] 2 containers: [82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287]
	I1004 03:45:35.861427 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.864955 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.869212 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1004 03:45:35.869331 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 03:45:35.907556 1364533 cri.go:89] found id: "f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515"
	I1004 03:45:35.907588 1364533 cri.go:89] found id: "9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b"
	I1004 03:45:35.907593 1364533 cri.go:89] found id: ""
	I1004 03:45:35.907600 1364533 logs.go:282] 2 containers: [f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515 9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b]
	I1004 03:45:35.907689 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.911221 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.914458 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1004 03:45:35.914554 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1004 03:45:35.958883 1364533 cri.go:89] found id: "e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920"
	I1004 03:45:35.958958 1364533 cri.go:89] found id: ""
	I1004 03:45:35.958980 1364533 logs.go:282] 1 containers: [e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920]
	I1004 03:45:35.959043 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:35.962624 1364533 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1004 03:45:35.962724 1364533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 03:45:36.020750 1364533 cri.go:89] found id: "04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf"
	I1004 03:45:36.020772 1364533 cri.go:89] found id: "b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77"
	I1004 03:45:36.020777 1364533 cri.go:89] found id: ""
	I1004 03:45:36.020784 1364533 logs.go:282] 2 containers: [04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77]
	I1004 03:45:36.020845 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:36.024770 1364533 ssh_runner.go:195] Run: which crictl
	I1004 03:45:36.029362 1364533 logs.go:123] Gathering logs for kube-scheduler [1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23] ...
	I1004 03:45:36.029391 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23"
	I1004 03:45:36.091438 1364533 logs.go:123] Gathering logs for etcd [5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08] ...
	I1004 03:45:36.092069 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08"
	I1004 03:45:36.158735 1364533 logs.go:123] Gathering logs for etcd [cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec] ...
	I1004 03:45:36.158835 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec"
	I1004 03:45:36.224436 1364533 logs.go:123] Gathering logs for coredns [c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d] ...
	I1004 03:45:36.224468 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d"
	I1004 03:45:36.280120 1364533 logs.go:123] Gathering logs for coredns [5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160] ...
	I1004 03:45:36.280149 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160"
	I1004 03:45:36.346943 1364533 logs.go:123] Gathering logs for kindnet [9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b] ...
	I1004 03:45:36.346972 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b"
	I1004 03:45:36.398339 1364533 logs.go:123] Gathering logs for kubernetes-dashboard [e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920] ...
	I1004 03:45:36.398369 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920"
	I1004 03:45:36.449141 1364533 logs.go:123] Gathering logs for storage-provisioner [b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77] ...
	I1004 03:45:36.449176 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77"
	I1004 03:45:36.511292 1364533 logs.go:123] Gathering logs for containerd ...
	I1004 03:45:36.511322 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1004 03:45:36.585926 1364533 logs.go:123] Gathering logs for kubelet ...
	I1004 03:45:36.585964 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1004 03:45:36.646922 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720678     671 reflector.go:138] object-"kube-system"/"kindnet-token-896vn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-896vn" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.647168 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720764     671 reflector.go:138] object-"kube-system"/"coredns-token-b58cn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b58cn" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.647399 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720820     671 reflector.go:138] object-"kube-system"/"storage-provisioner-token-qfjnb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-qfjnb" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.647648 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720869     671 reflector.go:138] object-"kube-system"/"kube-proxy-token-9x4ch": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-9x4ch" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.647851 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720917     671 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.648058 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.720983     671 reflector.go:138] object-"default"/"default-token-mqxtr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-mqxtr" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.648288 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.721024     671 reflector.go:138] object-"kube-system"/"metrics-server-token-75zps": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-75zps" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.648496 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:07 old-k8s-version-445570 kubelet[671]: E1004 03:40:07.721068     671 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-445570" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-445570' and this object
	W1004 03:45:36.663090 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:11 old-k8s-version-445570 kubelet[671]: E1004 03:40:11.809865     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:36.663300 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:12 old-k8s-version-445570 kubelet[671]: E1004 03:40:12.524062     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.666126 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:25 old-k8s-version-445570 kubelet[671]: E1004 03:40:25.351183     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:36.668746 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:37 old-k8s-version-445570 kubelet[671]: E1004 03:40:37.341825     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.669287 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:37 old-k8s-version-445570 kubelet[671]: E1004 03:40:37.610610     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.669734 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:38 old-k8s-version-445570 kubelet[671]: E1004 03:40:38.614513     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.670214 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:39 old-k8s-version-445570 kubelet[671]: E1004 03:40:39.617155     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.673263 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:52 old-k8s-version-445570 kubelet[671]: E1004 03:40:52.345391     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:36.674006 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:53 old-k8s-version-445570 kubelet[671]: E1004 03:40:53.660236     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.674409 1364533 logs.go:138] Found kubelet problem: Oct 04 03:40:58 old-k8s-version-445570 kubelet[671]: E1004 03:40:58.315454     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.674619 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:07 old-k8s-version-445570 kubelet[671]: E1004 03:41:07.338008     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.675004 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:10 old-k8s-version-445570 kubelet[671]: E1004 03:41:10.337576     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.675228 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:19 old-k8s-version-445570 kubelet[671]: E1004 03:41:19.339131     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.675982 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:26 old-k8s-version-445570 kubelet[671]: E1004 03:41:26.778876     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.676373 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:28 old-k8s-version-445570 kubelet[671]: E1004 03:41:28.315566     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.679056 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:34 old-k8s-version-445570 kubelet[671]: E1004 03:41:34.346479     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:36.679435 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:41 old-k8s-version-445570 kubelet[671]: E1004 03:41:41.341220     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.679620 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:47 old-k8s-version-445570 kubelet[671]: E1004 03:41:47.343841     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.679941 1364533 logs.go:138] Found kubelet problem: Oct 04 03:41:55 old-k8s-version-445570 kubelet[671]: E1004 03:41:55.338126     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.680146 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:00 old-k8s-version-445570 kubelet[671]: E1004 03:42:00.341688     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.680766 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:07 old-k8s-version-445570 kubelet[671]: E1004 03:42:07.887419     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.681104 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:08 old-k8s-version-445570 kubelet[671]: E1004 03:42:08.891782     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.681291 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:14 old-k8s-version-445570 kubelet[671]: E1004 03:42:14.339176     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.681617 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:21 old-k8s-version-445570 kubelet[671]: E1004 03:42:21.337426     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.681799 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:28 old-k8s-version-445570 kubelet[671]: E1004 03:42:28.337831     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.682122 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:32 old-k8s-version-445570 kubelet[671]: E1004 03:42:32.337461     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.682301 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:41 old-k8s-version-445570 kubelet[671]: E1004 03:42:41.337927     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.682633 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:46 old-k8s-version-445570 kubelet[671]: E1004 03:42:46.337394     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.682819 1364533 logs.go:138] Found kubelet problem: Oct 04 03:42:54 old-k8s-version-445570 kubelet[671]: E1004 03:42:54.337799     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.683145 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:01 old-k8s-version-445570 kubelet[671]: E1004 03:43:01.341025     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.685721 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:08 old-k8s-version-445570 kubelet[671]: E1004 03:43:08.345159     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1004 03:45:36.686056 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:14 old-k8s-version-445570 kubelet[671]: E1004 03:43:14.337315     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.686242 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:22 old-k8s-version-445570 kubelet[671]: E1004 03:43:22.337815     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.686830 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:29 old-k8s-version-445570 kubelet[671]: E1004 03:43:29.135206     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.687012 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:33 old-k8s-version-445570 kubelet[671]: E1004 03:43:33.340492     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.687380 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:38 old-k8s-version-445570 kubelet[671]: E1004 03:43:38.315603     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.687579 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:48 old-k8s-version-445570 kubelet[671]: E1004 03:43:48.337782     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.687935 1364533 logs.go:138] Found kubelet problem: Oct 04 03:43:52 old-k8s-version-445570 kubelet[671]: E1004 03:43:52.337453     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.688125 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:00 old-k8s-version-445570 kubelet[671]: E1004 03:44:00.340379     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.688471 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:04 old-k8s-version-445570 kubelet[671]: E1004 03:44:04.337451     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.688815 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:15 old-k8s-version-445570 kubelet[671]: E1004 03:44:15.338470     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.689023 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:15 old-k8s-version-445570 kubelet[671]: E1004 03:44:15.338490     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.689214 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:26 old-k8s-version-445570 kubelet[671]: E1004 03:44:26.338825     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.689766 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:30 old-k8s-version-445570 kubelet[671]: E1004 03:44:30.337556     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.689963 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:37 old-k8s-version-445570 kubelet[671]: E1004 03:44:37.337773     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.690288 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:42 old-k8s-version-445570 kubelet[671]: E1004 03:44:42.337758     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.690476 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:48 old-k8s-version-445570 kubelet[671]: E1004 03:44:48.337763     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.690800 1364533 logs.go:138] Found kubelet problem: Oct 04 03:44:55 old-k8s-version-445570 kubelet[671]: E1004 03:44:55.342140     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.690983 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:00 old-k8s-version-445570 kubelet[671]: E1004 03:45:00.338647     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.692661 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:10 old-k8s-version-445570 kubelet[671]: E1004 03:45:10.337424     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.692859 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:13 old-k8s-version-445570 kubelet[671]: E1004 03:45:13.337752     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.693190 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:21 old-k8s-version-445570 kubelet[671]: E1004 03:45:21.338227     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:36.693373 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:27 old-k8s-version-445570 kubelet[671]: E1004 03:45:27.339413     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:36.695156 1364533 logs.go:138] Found kubelet problem: Oct 04 03:45:35 old-k8s-version-445570 kubelet[671]: E1004 03:45:35.337992     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	I1004 03:45:36.695182 1364533 logs.go:123] Gathering logs for describe nodes ...
	I1004 03:45:36.695197 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 03:45:36.893650 1364533 logs.go:123] Gathering logs for kube-apiserver [7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454] ...
	I1004 03:45:36.893681 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454"
	I1004 03:45:36.998974 1364533 logs.go:123] Gathering logs for kindnet [f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515] ...
	I1004 03:45:36.999014 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515"
	I1004 03:45:37.066175 1364533 logs.go:123] Gathering logs for container status ...
	I1004 03:45:37.066214 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 03:45:37.144258 1364533 logs.go:123] Gathering logs for storage-provisioner [04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf] ...
	I1004 03:45:37.144315 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf"
	I1004 03:45:37.198229 1364533 logs.go:123] Gathering logs for dmesg ...
	I1004 03:45:37.198264 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 03:45:37.216564 1364533 logs.go:123] Gathering logs for kube-scheduler [fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91] ...
	I1004 03:45:37.216601 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91"
	I1004 03:45:37.279944 1364533 logs.go:123] Gathering logs for kube-proxy [7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36] ...
	I1004 03:45:37.279978 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36"
	I1004 03:45:37.332522 1364533 logs.go:123] Gathering logs for kube-proxy [db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321] ...
	I1004 03:45:37.332550 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321"
	I1004 03:45:37.401262 1364533 logs.go:123] Gathering logs for kube-apiserver [2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f] ...
	I1004 03:45:37.401296 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f"
	I1004 03:45:37.463550 1364533 logs.go:123] Gathering logs for kube-controller-manager [82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f] ...
	I1004 03:45:37.463599 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f"
	I1004 03:45:37.568577 1364533 logs.go:123] Gathering logs for kube-controller-manager [f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287] ...
	I1004 03:45:37.568611 1364533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287"
	I1004 03:45:37.673677 1364533 out.go:358] Setting ErrFile to fd 2...
	I1004 03:45:37.673705 1364533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1004 03:45:37.673761 1364533 out.go:270] X Problems detected in kubelet:
	W1004 03:45:37.673769 1364533 out.go:270]   Oct 04 03:45:10 old-k8s-version-445570 kubelet[671]: E1004 03:45:10.337424     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:37.673788 1364533 out.go:270]   Oct 04 03:45:13 old-k8s-version-445570 kubelet[671]: E1004 03:45:13.337752     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:37.673797 1364533 out.go:270]   Oct 04 03:45:21 old-k8s-version-445570 kubelet[671]: E1004 03:45:21.338227     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	W1004 03:45:37.673802 1364533 out.go:270]   Oct 04 03:45:27 old-k8s-version-445570 kubelet[671]: E1004 03:45:27.339413     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1004 03:45:37.673809 1364533 out.go:270]   Oct 04 03:45:35 old-k8s-version-445570 kubelet[671]: E1004 03:45:35.337992     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	I1004 03:45:37.673813 1364533 out.go:358] Setting ErrFile to fd 2...
	I1004 03:45:37.673821 1364533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:45:47.675349 1364533 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1004 03:45:47.688748 1364533 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1004 03:45:47.741147 1364533 out.go:201] 
	W1004 03:45:47.765419 1364533 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1004 03:45:47.765461 1364533 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1004 03:45:47.765480 1364533 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1004 03:45:47.765485 1364533 out.go:270] * 
	W1004 03:45:47.766343 1364533 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 03:45:47.797913 1364533 out.go:201] 

                                                
                                                
** /stderr **
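Note the contradiction captured at the tail of the stderr above: the apiserver's /healthz endpoint answers 200 ("ok") at 03:45:47, seconds before minikube exits with an unhealthy-control-plane error. The probe is easy to repeat by hand; a minimal sketch, assuming the container IP (192.168.76.2) and the 127.0.0.1:34550 -> 8443/tcp host mapping shown in the docker inspect output further down, and assuming the cluster still grants the default anonymous access to /healthz (hence plain -k, no client certificate):

    # Same endpoint the minikube process polls in the log above:
    curl -sk https://192.168.76.2:8443/healthz
    # Or through the published host port (see NetworkSettings.Ports below):
    curl -sk https://127.0.0.1:34550/healthz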
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-445570 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
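Exit status 102 here corresponds to the K8S_UNHEALTHY_CONTROL_PLANE failure printed in the stderr: the apiserver responds, but the node never reports the requested v1.20.0 control plane within the 6m0s wait. The repeated metrics-server ErrImagePull/ImagePullBackOff lines are expected noise, since the test deliberately points the MetricsServer registry at fake.domain (see the addons enable command in the Audit table below), and the dashboard-metrics-scraper CrashLoopBackOff is likewise pre-existing churn rather than the failure cause. The log prints its own recovery suggestion; a hedged cleanup-and-retry sketch, built only from commands already shown in this report, would be:

    # Suggested by the failure output: wipe all profiles and cached state...
    minikube delete --all --purge
    # ...then retry the exact start invocation that exited with status 102:
    out/minikube-linux-arm64 start -p old-k8s-version-445570 --memory=2200 \
        --alsologtostderr --wait=true --kvm-network=default \
        --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
        --keep-context=false --driver=docker \
        --container-runtime=containerd --kubernetes-version=v1.20.0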
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-445570
helpers_test.go:235: (dbg) docker inspect old-k8s-version-445570:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d441e0f4417540d052bbcffb3de00f9e209050733a427ae19c1053e0076b53",
	        "Created": "2024-10-04T03:36:27.277899456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1364731,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-04T03:39:39.539218687Z",
	            "FinishedAt": "2024-10-04T03:39:38.47895341Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/53d441e0f4417540d052bbcffb3de00f9e209050733a427ae19c1053e0076b53/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d441e0f4417540d052bbcffb3de00f9e209050733a427ae19c1053e0076b53/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d441e0f4417540d052bbcffb3de00f9e209050733a427ae19c1053e0076b53/hosts",
	        "LogPath": "/var/lib/docker/containers/53d441e0f4417540d052bbcffb3de00f9e209050733a427ae19c1053e0076b53/53d441e0f4417540d052bbcffb3de00f9e209050733a427ae19c1053e0076b53-json.log",
	        "Name": "/old-k8s-version-445570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-445570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-445570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d4cec1bb00851aec00976a8d9ad32c710487e4427d64029f4f63c539f20da7ee-init/diff:/var/lib/docker/overlay2/3fd4f374838913cfff21eeb0320112c1c5932de8178b660a56df0e13b7402d74/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d4cec1bb00851aec00976a8d9ad32c710487e4427d64029f4f63c539f20da7ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d4cec1bb00851aec00976a8d9ad32c710487e4427d64029f4f63c539f20da7ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d4cec1bb00851aec00976a8d9ad32c710487e4427d64029f4f63c539f20da7ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-445570",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-445570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-445570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-445570",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-445570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53e7bfecaf0ce21860e1984b85c65ac72115ddd4ce814175a99e877df7263ac4",
	            "SandboxKey": "/var/run/docker/netns/53e7bfecaf0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34549"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-445570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "4d255da3c2e5a150ca577d15d5bd841fb0968b57142c23debebbe9468bcad537",
	                    "EndpointID": "0234b99f3390763ff32d9aa251e4a5d5c8defbda8307c360102ec8cf4b1c2d27",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-445570",
	                        "53d441e0f441"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
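Rather than scanning the full JSON, the post-mortem facts used above can be pulled directly with docker inspect Go templates. A small sketch; the field paths come from the output above, and the container name old-k8s-version-445570 is taken from this run:

    # Container state and restart count ("running", restarts=0 above):
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-445570
    # Host port published for the apiserver's 8443/tcp (34550 above):
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-445570
    # Container IP on the profile network (192.168.76.2 above); the network
    # name contains dashes, so it needs index rather than dot syntax:
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-445570").IPAddress}}' old-k8s-version-445570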
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-445570 -n old-k8s-version-445570
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-445570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-445570 logs -n 25: (2.784329076s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-671389                              | cert-expiration-671389   | jenkins | v1.34.0 | 04 Oct 24 03:35 UTC | 04 Oct 24 03:36 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-103536                               | force-systemd-env-103536 | jenkins | v1.34.0 | 04 Oct 24 03:35 UTC | 04 Oct 24 03:35 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-103536                            | force-systemd-env-103536 | jenkins | v1.34.0 | 04 Oct 24 03:35 UTC | 04 Oct 24 03:35 UTC |
	| start   | -p cert-options-884103                                 | cert-options-884103      | jenkins | v1.34.0 | 04 Oct 24 03:35 UTC | 04 Oct 24 03:36 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-884103 ssh                                | cert-options-884103      | jenkins | v1.34.0 | 04 Oct 24 03:36 UTC | 04 Oct 24 03:36 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-884103 -- sudo                         | cert-options-884103      | jenkins | v1.34.0 | 04 Oct 24 03:36 UTC | 04 Oct 24 03:36 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-884103                                 | cert-options-884103      | jenkins | v1.34.0 | 04 Oct 24 03:36 UTC | 04 Oct 24 03:36 UTC |
	| start   | -p old-k8s-version-445570                              | old-k8s-version-445570   | jenkins | v1.34.0 | 04 Oct 24 03:36 UTC | 04 Oct 24 03:39 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-671389                              | cert-expiration-671389   | jenkins | v1.34.0 | 04 Oct 24 03:39 UTC | 04 Oct 24 03:39 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-671389                              | cert-expiration-671389   | jenkins | v1.34.0 | 04 Oct 24 03:39 UTC | 04 Oct 24 03:39 UTC |
	| start   | -p no-preload-554493                                   | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:39 UTC | 04 Oct 24 03:40 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-445570        | old-k8s-version-445570   | jenkins | v1.34.0 | 04 Oct 24 03:39 UTC | 04 Oct 24 03:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-445570                              | old-k8s-version-445570   | jenkins | v1.34.0 | 04 Oct 24 03:39 UTC | 04 Oct 24 03:39 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-445570             | old-k8s-version-445570   | jenkins | v1.34.0 | 04 Oct 24 03:39 UTC | 04 Oct 24 03:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-445570                              | old-k8s-version-445570   | jenkins | v1.34.0 | 04 Oct 24 03:39 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-554493             | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:40 UTC | 04 Oct 24 03:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-554493                                   | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:40 UTC | 04 Oct 24 03:40 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-554493                  | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:40 UTC | 04 Oct 24 03:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-554493                                   | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:40 UTC | 04 Oct 24 03:45 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-554493 image list                           | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC | 04 Oct 24 03:45 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-554493                                   | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC | 04 Oct 24 03:45 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-554493                                   | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC | 04 Oct 24 03:45 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-554493                                   | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC | 04 Oct 24 03:45 UTC |
	| delete  | -p no-preload-554493                                   | no-preload-554493        | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC | 04 Oct 24 03:45 UTC |
	| start   | -p embed-certs-690742                                  | embed-certs-690742       | jenkins | v1.34.0 | 04 Oct 24 03:45 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
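	
	For reference, the "start -p old-k8s-version-445570" rows above (the start whose end-time column stays blank) flatten to the single invocation below; the binary path is taken from the MINIKUBE_BIN value logged in the "Last Start" section that follows:
	
	    out/minikube-linux-arm64 start -p old-k8s-version-445570 --memory=2200 \
	      --alsologtostderr --wait=true --kvm-network=default \
	      --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	      --keep-context=false --driver=docker \
	      --container-runtime=containerd --kubernetes-version=v1.20.0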
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 03:45:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 03:45:42.884494 1375632 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:45:42.884618 1375632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:45:42.884627 1375632 out.go:358] Setting ErrFile to fd 2...
	I1004 03:45:42.884633 1375632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:45:42.884872 1375632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:45:42.885307 1375632 out.go:352] Setting JSON to false
	I1004 03:45:42.886442 1375632 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26891,"bootTime":1727986652,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 03:45:42.886514 1375632 start.go:139] virtualization:  
	I1004 03:45:42.889190 1375632 out.go:177] * [embed-certs-690742] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:45:42.891995 1375632 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:45:42.892124 1375632 notify.go:220] Checking for updates...
	I1004 03:45:42.895534 1375632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:45:42.897765 1375632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 03:45:42.899990 1375632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 03:45:42.901773 1375632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:45:42.903446 1375632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:45:42.905971 1375632 config.go:182] Loaded profile config "old-k8s-version-445570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1004 03:45:42.906083 1375632 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:45:42.936395 1375632 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:45:42.936512 1375632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:45:42.990583 1375632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 03:45:42.980301172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:45:42.990689 1375632 docker.go:318] overlay module found
	I1004 03:45:42.992732 1375632 out.go:177] * Using the docker driver based on user configuration
	I1004 03:45:42.994423 1375632 start.go:297] selected driver: docker
	I1004 03:45:42.994441 1375632 start.go:901] validating driver "docker" against <nil>
	I1004 03:45:42.994455 1375632 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:45:42.995096 1375632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:45:43.057341 1375632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 03:45:43.048129251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:45:43.057546 1375632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 03:45:43.057767 1375632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 03:45:43.059735 1375632 out.go:177] * Using Docker driver with root privileges
	I1004 03:45:43.061527 1375632 cni.go:84] Creating CNI manager for ""
	I1004 03:45:43.061588 1375632 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 03:45:43.061601 1375632 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 03:45:43.061679 1375632 start.go:340] cluster config:
	{Name:embed-certs-690742 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-690742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:45:43.063494 1375632 out.go:177] * Starting "embed-certs-690742" primary control-plane node in "embed-certs-690742" cluster
	I1004 03:45:43.065103 1375632 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1004 03:45:43.067121 1375632 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1004 03:45:43.068864 1375632 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 03:45:43.068868 1375632 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 03:45:43.068927 1375632 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1004 03:45:43.068935 1375632 cache.go:56] Caching tarball of preloaded images
	I1004 03:45:43.069014 1375632 preload.go:172] Found /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1004 03:45:43.069024 1375632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1004 03:45:43.069128 1375632 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/embed-certs-690742/config.json ...
	I1004 03:45:43.069145 1375632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/embed-certs-690742/config.json: {Name:mkf7865187de804637fa3d0890640c1689051ba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 03:45:43.088034 1375632 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1004 03:45:43.088056 1375632 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1004 03:45:43.088069 1375632 cache.go:194] Successfully downloaded all kic artifacts
	I1004 03:45:43.088097 1375632 start.go:360] acquireMachinesLock for embed-certs-690742: {Name:mk20ca3433e0c49bc2ecfdb24b72603f90bc0b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 03:45:43.088812 1375632 start.go:364] duration metric: took 692.036µs to acquireMachinesLock for "embed-certs-690742"
	I1004 03:45:43.088850 1375632 start.go:93] Provisioning new machine with config: &{Name:embed-certs-690742 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-690742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1004 03:45:43.088933 1375632 start.go:125] createHost starting for "" (driver="docker")
	I1004 03:45:47.675349 1364533 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1004 03:45:47.688748 1364533 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1004 03:45:47.741147 1364533 out.go:201] 
	W1004 03:45:47.765419 1364533 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1004 03:45:47.765461 1364533 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1004 03:45:47.765480 1364533 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1004 03:45:47.765485 1364533 out.go:270] * 
	W1004 03:45:47.766343 1364533 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 03:45:47.797913 1364533 out.go:201] 
	I1004 03:45:43.091554 1375632 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1004 03:45:43.091836 1375632 start.go:159] libmachine.API.Create for "embed-certs-690742" (driver="docker")
	I1004 03:45:43.091870 1375632 client.go:168] LocalClient.Create starting
	I1004 03:45:43.091939 1375632 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/ca.pem
	I1004 03:45:43.091973 1375632 main.go:141] libmachine: Decoding PEM data...
	I1004 03:45:43.091992 1375632 main.go:141] libmachine: Parsing certificate...
	I1004 03:45:43.092049 1375632 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19546-1149434/.minikube/certs/cert.pem
	I1004 03:45:43.092086 1375632 main.go:141] libmachine: Decoding PEM data...
	I1004 03:45:43.092100 1375632 main.go:141] libmachine: Parsing certificate...
	I1004 03:45:43.092558 1375632 cli_runner.go:164] Run: docker network inspect embed-certs-690742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1004 03:45:43.114089 1375632 cli_runner.go:211] docker network inspect embed-certs-690742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1004 03:45:43.114174 1375632 network_create.go:284] running [docker network inspect embed-certs-690742] to gather additional debugging logs...
	I1004 03:45:43.114197 1375632 cli_runner.go:164] Run: docker network inspect embed-certs-690742
	W1004 03:45:43.129367 1375632 cli_runner.go:211] docker network inspect embed-certs-690742 returned with exit code 1
	I1004 03:45:43.129400 1375632 network_create.go:287] error running [docker network inspect embed-certs-690742]: docker network inspect embed-certs-690742: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-690742 not found
	I1004 03:45:43.129414 1375632 network_create.go:289] output of [docker network inspect embed-certs-690742]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-690742 not found
	
	** /stderr **
	I1004 03:45:43.129558 1375632 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1004 03:45:43.144067 1375632 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9a360696897c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9f:3a:5e:6c} reservation:<nil>}
	I1004 03:45:43.144627 1375632 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b255e34ff22 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:c8:25:72:87} reservation:<nil>}
	I1004 03:45:43.145085 1375632 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-196dd403b54f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:45:3d:82:93} reservation:<nil>}
	I1004 03:45:43.145452 1375632 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4d255da3c2e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:bf:3d:c0:9e} reservation:<nil>}
	I1004 03:45:43.145972 1375632 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001920000}
	I1004 03:45:43.145996 1375632 network_create.go:124] attempt to create docker network embed-certs-690742 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1004 03:45:43.146055 1375632 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-690742 embed-certs-690742
	I1004 03:45:43.224890 1375632 network_create.go:108] docker network embed-certs-690742 192.168.85.0/24 created
	I1004 03:45:43.224925 1375632 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-690742" container
	I1004 03:45:43.225019 1375632 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1004 03:45:43.239022 1375632 cli_runner.go:164] Run: docker volume create embed-certs-690742 --label name.minikube.sigs.k8s.io=embed-certs-690742 --label created_by.minikube.sigs.k8s.io=true
	I1004 03:45:43.256524 1375632 oci.go:103] Successfully created a docker volume embed-certs-690742
	I1004 03:45:43.256619 1375632 cli_runner.go:164] Run: docker run --rm --name embed-certs-690742-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-690742 --entrypoint /usr/bin/test -v embed-certs-690742:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1004 03:45:43.881965 1375632 oci.go:107] Successfully prepared a docker volume embed-certs-690742
	I1004 03:45:43.882024 1375632 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 03:45:43.882045 1375632 kic.go:194] Starting extracting preloaded images to volume ...
	I1004 03:45:43.882121 1375632 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-690742:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
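	
	The network.go lines above show minikube's free-subnet scan: candidate /24 networks are checked against the host's existing docker bridges, and the first unused one (192.168.85.0/24 here) is kept. Below is a minimal, self-contained Go sketch of that selection step, an illustration rather than minikube's actual implementation: it assumes the step-by-9 third-octet pattern visible in this log (49, 58, 67, 76, 85) and stubs the taken set with the four subnets reported above.
	
	    package main
	
	    import "fmt"
	
	    func main() {
	    	// Subnets the log above reports as already backed by docker bridges.
	    	taken := map[string]bool{
	    		"192.168.49.0/24": true,
	    		"192.168.58.0/24": true,
	    		"192.168.67.0/24": true,
	    		"192.168.76.0/24": true,
	    	}
	    	// Walk candidates the way this log does: start at 192.168.49.0/24
	    	// and step the third octet by 9 until a free /24 turns up.
	    	for octet := 49; octet <= 255; octet += 9 {
	    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
	    		if taken[subnet] {
	    			fmt.Println("skipping subnet", subnet, "that is taken")
	    			continue
	    		}
	    		fmt.Println("using free private subnet", subnet)
	    		return
	    	}
	    	fmt.Println("no free /24 candidate left in 192.168.0.0/16")
	    }
	
	Run against the taken set above, this prints 192.168.85.0/24, the subnet the run actually selected before creating the embed-certs-690742 network.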
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	541099ab654ad       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   39a59314e83fb       dashboard-metrics-scraper-8d5bb5db8-brtqr
	e25a99d73dc12       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   a68df72e19ad8       kubernetes-dashboard-cd95d586-psgpt
	f8aacb2fb53da       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   6bff25a9fbcf6       kindnet-56kgm
	04caef5abf18e       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   0a1f920990ae2       storage-provisioner
	c65ea8441f9c5       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   e4d92cce003c2       coredns-74ff55c5b-d25t5
	7a1106279c108       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   514ddeb834289       kube-proxy-tm9p4
	72d7d731f2d17       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   2291d979d96b0       busybox
	fd61154c785ee       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   96a415f44644b       kube-scheduler-old-k8s-version-445570
	82b8e7498dd88       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   47ef51c82fb03       kube-controller-manager-old-k8s-version-445570
	5436a81bb30f9       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   299126b4ba4ff       etcd-old-k8s-version-445570
	7295a2d0ecfbb       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   0835e665ec181       kube-apiserver-old-k8s-version-445570
	88ad5c2cefc35       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   9cc6fe0921e37       busybox
	5b535bea1bc53       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   f49bfd82584d4       coredns-74ff55c5b-d25t5
	9b85407f81192       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   bd1865e1916fc       kindnet-56kgm
	b199a132a6523       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   5ff96f6895330       storage-provisioner
	db56e60ccb26a       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   657fd06fc071b       kube-proxy-tm9p4
	f0343958cd2a6       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   45b4f3532885a       kube-controller-manager-old-k8s-version-445570
	1f76481f61cbe       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   e56558cdb61fe       kube-scheduler-old-k8s-version-445570
	2ca331c9606d2       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   8025a67c4cbbb       kube-apiserver-old-k8s-version-445570
	cf53ebe3031ec       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   79667dbbe1678       etcd-old-k8s-version-445570
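	
	This listing is the CRI-level view from inside the old-k8s-version-445570 node; the dashboard-metrics-scraper container shown Exited at attempt 5 is the one containerd keeps recreating in the log that follows. A hypothetical way to reproduce the same listing while the profile is still running (assuming crictl is present on the node, as it is in minikube's kicbase image):
	
	    out/minikube-linux-arm64 ssh -p old-k8s-version-445570 -- sudo crictl ps -a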
	
	
	==> containerd <==
	Oct 04 03:41:34 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:41:34.343575688Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 04 03:41:34 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:41:34.345137484Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 04 03:41:34 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:41:34.345236085Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.340877475Z" level=info msg="CreateContainer within sandbox \"39a59314e83fb2452394a91abe5e606ca0b0fac0dbea00438355f2a79580e5b9\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.362537195Z" level=info msg="CreateContainer within sandbox \"39a59314e83fb2452394a91abe5e606ca0b0fac0dbea00438355f2a79580e5b9\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"46a6636b23643ba2a917c4748907eb005f58f3c610509360ac3f4c670e6fa1f1\""
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.363300452Z" level=info msg="StartContainer for \"46a6636b23643ba2a917c4748907eb005f58f3c610509360ac3f4c670e6fa1f1\""
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.448059053Z" level=info msg="StartContainer for \"46a6636b23643ba2a917c4748907eb005f58f3c610509360ac3f4c670e6fa1f1\" returns successfully"
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.474987947Z" level=info msg="shim disconnected" id=46a6636b23643ba2a917c4748907eb005f58f3c610509360ac3f4c670e6fa1f1 namespace=k8s.io
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.475052866Z" level=warning msg="cleaning up after shim disconnected" id=46a6636b23643ba2a917c4748907eb005f58f3c610509360ac3f4c670e6fa1f1 namespace=k8s.io
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.475074397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.898139471Z" level=info msg="RemoveContainer for \"90da46afcbe44648b2eb32cb1cd5c826ef87c223d2054ef51472000b5cbcbbfe\""
	Oct 04 03:42:07 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:42:07.905825185Z" level=info msg="RemoveContainer for \"90da46afcbe44648b2eb32cb1cd5c826ef87c223d2054ef51472000b5cbcbbfe\" returns successfully"
	Oct 04 03:43:08 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:08.338367552Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:43:08 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:08.343062226Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 04 03:43:08 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:08.344678831Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 04 03:43:08 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:08.344684468Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 04 03:43:28 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:28.339287343Z" level=info msg="CreateContainer within sandbox \"39a59314e83fb2452394a91abe5e606ca0b0fac0dbea00438355f2a79580e5b9\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 04 03:43:28 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:28.354553104Z" level=info msg="CreateContainer within sandbox \"39a59314e83fb2452394a91abe5e606ca0b0fac0dbea00438355f2a79580e5b9\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac\""
	Oct 04 03:43:28 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:28.355372073Z" level=info msg="StartContainer for \"541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac\""
	Oct 04 03:43:28 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:28.432770645Z" level=info msg="StartContainer for \"541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac\" returns successfully"
	Oct 04 03:43:28 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:28.462225344Z" level=info msg="shim disconnected" id=541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac namespace=k8s.io
	Oct 04 03:43:28 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:28.462290509Z" level=warning msg="cleaning up after shim disconnected" id=541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac namespace=k8s.io
	Oct 04 03:43:28 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:28.462300913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 04 03:43:29 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:29.139875282Z" level=info msg="RemoveContainer for \"46a6636b23643ba2a917c4748907eb005f58f3c610509360ac3f4c670e6fa1f1\""
	Oct 04 03:43:29 old-k8s-version-445570 containerd[570]: time="2024-10-04T03:43:29.162340689Z" level=info msg="RemoveContainer for \"46a6636b23643ba2a917c4748907eb005f58f3c610509360ac3f4c670e6fa1f1\" returns successfully"
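	
	The repeated pull failures above are the intended effect of the earlier "addons enable metrics-server ... --registries=MetricsServer=fake.domain" command: the metrics-server image reference is rewritten to a registry host with no DNS record, so every pull dies at name resolution. A minimal Go sketch reproducing the same class of error; an illustration that only assumes the host running it has no record for fake.domain:
	
	    package main
	
	    import (
	    	"fmt"
	    	"net"
	    )
	
	    func main() {
	    	// fake.domain has no DNS record, so LookupHost fails with
	    	// "lookup fake.domain ...: no such host", the same error
	    	// containerd reports when it tries to HEAD the registry.
	    	if addrs, err := net.LookupHost("fake.domain"); err != nil {
	    		fmt.Println("pull would fail:", err)
	    	} else {
	    		fmt.Println("unexpectedly resolved:", addrs)
	    	}
	    }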
	
	
	==> coredns [5b535bea1bc5353fd89eb2f07b641c965bcf0920d107f3e61183ce2e39af4160] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33719 - 54397 "HINFO IN 1863994683067902719.8725812619912626578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005646324s
	
	
	==> coredns [c65ea8441f9c58b1b1bad277473595b6c22ee03a26bf15c60bedacb50f08105d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:45252 - 73 "HINFO IN 5779111556160776051.4278965858038898769. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00543611s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-445570
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-445570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=old-k8s-version-445570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_04T03_37_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:37:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-445570
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:45:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:41:00 +0000   Fri, 04 Oct 2024 03:36:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:41:00 +0000   Fri, 04 Oct 2024 03:36:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:41:00 +0000   Fri, 04 Oct 2024 03:36:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:41:00 +0000   Fri, 04 Oct 2024 03:37:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-445570
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 45110b9e1c304cfebedfd4551012ef48
	  System UUID:                8355de09-cc63-4b7c-9a1c-cb7a1f8036f2
	  Boot ID:                    c9bb91eb-f5c3-4f81-9b8d-aca1ad72b7b9
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-d25t5                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m30s
	  kube-system                 etcd-old-k8s-version-445570                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m37s
	  kube-system                 kindnet-56kgm                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m30s
	  kube-system                 kube-apiserver-old-k8s-version-445570             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-old-k8s-version-445570    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-tm9p4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-scheduler-old-k8s-version-445570             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 metrics-server-9975d5f86-7vbmk                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-brtqr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-psgpt               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-445570 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-445570 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x4 over 8m58s)  kubelet     Node old-k8s-version-445570 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m38s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m38s                  kubelet     Node old-k8s-version-445570 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m38s                  kubelet     Node old-k8s-version-445570 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s                  kubelet     Node old-k8s-version-445570 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m30s                  kubelet     Node old-k8s-version-445570 status is now: NodeReady
	  Normal  Starting                 8m29s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-445570 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x7 over 5m55s)  kubelet     Node old-k8s-version-445570 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-445570 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Oct 4 02:19] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.194222] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.045203] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	
	
	==> etcd [5436a81bb30f9621df50a410542d13c170eb3978f3fca9a4b268ce4954e20d08] <==
	2024-10-04 03:41:50.215149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:42:00.225558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:42:10.215255 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:42:20.215142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:42:30.215402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:42:40.215226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:42:50.215213 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:43:00.215949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:43:10.215120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:43:20.215240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:43:30.215319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:43:40.215120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:43:50.215189 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:44:00.241033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:44:10.215246 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:44:20.215264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:44:30.215249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:44:40.215218 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:44:50.215128 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:45:00.220824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:45:10.215167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:45:20.215301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:45:30.215275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:45:40.215954 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:45:50.215301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [cf53ebe3031ec2133d6246e747d96d7f552d2736d7e6cd96300f5a2ac21352ec] <==
	raft2024/10/04 03:36:54 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/04 03:36:54 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/04 03:36:54 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-04 03:36:54.371984 I | embed: ready to serve client requests
	2024-10-04 03:36:54.373404 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-04 03:36:54.373592 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-04 03:36:54.373891 I | embed: ready to serve client requests
	2024-10-04 03:36:54.375128 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-04 03:36:54.381226 I | etcdserver: published {Name:old-k8s-version-445570 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-04 03:36:54.404023 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-04 03:36:54.404676 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-04 03:37:16.678140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:37:23.608387 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:37:33.608215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:37:43.608376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:37:53.608135 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:38:03.608426 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:38:13.608232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:38:23.608465 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:38:33.608572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:38:43.608378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:38:53.608244 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:39:03.608307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:39:13.613151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-04 03:39:23.608413 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 03:45:50 up  7:28,  0 users,  load average: 1.40, 1.82, 2.36
	Linux old-k8s-version-445570 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [9b85407f811920c6bd4bf65c40d73c3828a80edb440329a260629dadac6c325b] <==
	I1004 03:37:25.002460       1 controller.go:374] Syncing nftables rules
	I1004 03:37:34.817577       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:37:34.817700       1 main.go:299] handling current node
	I1004 03:37:44.817800       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:37:44.817901       1 main.go:299] handling current node
	I1004 03:37:54.817235       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:37:54.817273       1 main.go:299] handling current node
	I1004 03:38:04.827844       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:38:04.827879       1 main.go:299] handling current node
	I1004 03:38:14.822322       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:38:14.822361       1 main.go:299] handling current node
	I1004 03:38:24.817244       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:38:24.817278       1 main.go:299] handling current node
	I1004 03:38:34.824026       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:38:34.824060       1 main.go:299] handling current node
	I1004 03:38:44.820365       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:38:44.820398       1 main.go:299] handling current node
	I1004 03:38:54.824920       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:38:54.824957       1 main.go:299] handling current node
	I1004 03:39:04.823801       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:39:04.823834       1 main.go:299] handling current node
	I1004 03:39:14.817531       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:39:14.817564       1 main.go:299] handling current node
	I1004 03:39:24.817820       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:39:24.817860       1 main.go:299] handling current node
	
	
	==> kindnet [f8aacb2fb53da0491e7751aed07aea89402a0735466b779e4d04bf40f82a3515] <==
	I1004 03:43:42.704568       1 main.go:299] handling current node
	I1004 03:43:52.708380       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:43:52.708473       1 main.go:299] handling current node
	I1004 03:44:02.710333       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:44:02.710535       1 main.go:299] handling current node
	I1004 03:44:12.701900       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:44:12.701931       1 main.go:299] handling current node
	I1004 03:44:22.708359       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:44:22.708394       1 main.go:299] handling current node
	I1004 03:44:32.708362       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:44:32.708398       1 main.go:299] handling current node
	I1004 03:44:42.707266       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:44:42.708183       1 main.go:299] handling current node
	I1004 03:44:52.713643       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:44:52.713675       1 main.go:299] handling current node
	I1004 03:45:02.709817       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:45:02.709856       1 main.go:299] handling current node
	I1004 03:45:12.701704       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:45:12.701801       1 main.go:299] handling current node
	I1004 03:45:22.708169       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:45:22.708420       1 main.go:299] handling current node
	I1004 03:45:32.706162       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:45:32.706198       1 main.go:299] handling current node
	I1004 03:45:42.714477       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1004 03:45:42.714515       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2ca331c9606d2e3fdf73e195154882978124a0238ea8e8961401d22f0b7cd51f] <==
	I1004 03:37:02.168411       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1004 03:37:02.696780       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:37:02.740210       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1004 03:37:02.844767       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1004 03:37:02.845811       1 controller.go:606] quota admission added evaluator for: endpoints
	I1004 03:37:02.850090       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:37:03.820244       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1004 03:37:04.336611       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1004 03:37:04.436500       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1004 03:37:12.767049       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:37:20.393783       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1004 03:37:20.395636       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1004 03:37:25.769201       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:37:25.769246       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:37:25.769257       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1004 03:37:56.308955       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:37:56.309071       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:37:56.309140       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1004 03:38:39.979921       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:38:39.979970       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:38:39.979985       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1004 03:39:21.237529       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:39:21.237578       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:39:21.237586       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E1004 03:39:24.463622       1 upgradeaware.go:387] Error proxying data from backend to client: write tcp 192.168.76.2:8443->192.168.76.1:55714: write: connection reset by peer
	
	
	==> kube-apiserver [7295a2d0ecfbb037cef3fc882b64353c1e4575b1c95d6af1c6fc9c96a1117454] <==
	I1004 03:42:27.406920       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:42:27.406950       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1004 03:43:03.791600       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:43:03.791662       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:43:03.791672       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1004 03:43:10.172857       1 handler_proxy.go:102] no RequestInfo found in the context
	E1004 03:43:10.172945       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 03:43:10.172958       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 03:43:41.354955       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:43:41.355010       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:43:41.355019       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1004 03:44:20.310729       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:44:20.310927       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:44:20.311017       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1004 03:44:55.519565       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:44:55.519618       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:44:55.519631       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1004 03:45:08.905607       1 handler_proxy.go:102] no RequestInfo found in the context
	E1004 03:45:08.905857       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 03:45:08.905873       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 03:45:34.464439       1 client.go:360] parsed scheme: "passthrough"
	I1004 03:45:34.464487       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1004 03:45:34.464495       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
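
Both apiserver instances keep logging a 503 for v1beta1.metrics.k8s.io: the aggregated APIService is registered but has no healthy backend, which matches the metrics-server ImagePullBackOff in the kubelet log further down. A quick two-step check (the k8s-app=metrics-server label is assumed from the stock addon manifests):

    # is the aggregated API marked Available, and if not, why?
    kubectl --context old-k8s-version-445570 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\t"}{.status.conditions[?(@.type=="Available")].message}{"\n"}'
    # is the backing deployment actually running?
    kubectl --context old-k8s-version-445570 -n kube-system get pods -l k8s-app=metrics-server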
	
	
	==> kube-controller-manager [82b8e7498dd882ec676ce6d369194f9c7ce4d6c8fd44bd1a4efa6b9e965d9c1f] <==
	E1004 03:41:27.468111       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:41:30.970142       1 request.go:655] Throttling request took 1.048045642s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W1004 03:41:31.821592       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:41:57.969899       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:42:03.472160       1 request.go:655] Throttling request took 1.048151808s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1004 03:42:04.323662       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:42:28.471757       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:42:35.974006       1 request.go:655] Throttling request took 1.048332333s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W1004 03:42:36.825584       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:42:58.973782       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:43:08.476096       1 request.go:655] Throttling request took 1.048508271s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W1004 03:43:09.327558       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:43:29.475899       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:43:40.978110       1 request.go:655] Throttling request took 1.048513578s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W1004 03:43:41.829562       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:43:59.977684       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:44:13.480150       1 request.go:655] Throttling request took 1.048376753s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W1004 03:44:14.331595       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:44:30.479582       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:44:45.981909       1 request.go:655] Throttling request took 1.048294196s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W1004 03:44:46.833499       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:45:00.981988       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1004 03:45:18.484068       1 request.go:655] Throttling request took 1.048520551s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W1004 03:45:19.335693       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 03:45:31.483928       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [f0343958cd2a62f0bc9fa707bef4553a3c44375d07b13d0c06f7728977fda287] <==
	I1004 03:37:20.385199       1 range_allocator.go:373] Set node old-k8s-version-445570 PodCIDR to [10.244.0.0/24]
	I1004 03:37:20.429519       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-445570" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1004 03:37:20.438333       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1004 03:37:20.471787       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tm9p4"
	I1004 03:37:20.476389       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-56kgm"
	I1004 03:37:20.479141       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-h7lvz"
	I1004 03:37:20.488881       1 shared_informer.go:247] Caches are synced for disruption 
	I1004 03:37:20.490276       1 disruption.go:339] Sending events to api server.
	I1004 03:37:20.490709       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1004 03:37:20.505188       1 shared_informer.go:247] Caches are synced for resource quota 
	I1004 03:37:20.517888       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-d25t5"
	I1004 03:37:20.568415       1 shared_informer.go:247] Caches are synced for job 
	E1004 03:37:20.599433       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"86526c89-dce6-4642-82a1-fb5043c17110", ResourceVersion:"265", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63863609824, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001a2b000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001a2b020)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001a2b040), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x400185d180), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a2b
060), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a2b080), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a2b0c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001886ba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d63b28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40003bfea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000ebe0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d63b78)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1004 03:37:20.600391       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"50b1b9f1-a11a-452d-8b49-db4d894c78ac", ResourceVersion:"277", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63863609824, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001a2b120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001a2b140)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001a2b160), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a2b180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a2b1a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a2b1c0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a2b1e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a2b220)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001886c00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d63d98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40003bff80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000ec00)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d63df0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1004 03:37:20.705379       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1004 03:37:21.005590       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1004 03:37:21.016329       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1004 03:37:21.016357       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1004 03:37:21.256396       1 request.go:655] Throttling request took 1.051213342s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	I1004 03:37:22.013961       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1004 03:37:22.014885       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-h7lvz"
	I1004 03:37:22.055601       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I1004 03:37:22.055862       1 shared_informer.go:247] Caches are synced for resource quota 
	I1004 03:37:25.368141       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1004 03:39:25.511074       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
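
The two very long "Operation cannot be fulfilled ... the object has been modified" errors earlier in this controller's log are routine optimistic-concurrency conflicts: the daemon-set controller wrote a status update against a resourceVersion that kubeadm had already bumped, and the write is simply retried on the next sync. The same conflict can be reproduced by hand; this is an illustrative sketch, not part of the test:

    # save the object, change it out from under the saved copy, then replace
    # the stale copy -- the replace fails with the identical conflict error
    kubectl --context old-k8s-version-445570 -n kube-system get ds kube-proxy -o yaml > /tmp/ds.yaml
    kubectl --context old-k8s-version-445570 -n kube-system patch ds kube-proxy --type merge -p '{"metadata":{"labels":{"demo":"conflict"}}}'
    kubectl --context old-k8s-version-445570 replace -f /tmp/ds.yaml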
	
	
	==> kube-proxy [7a1106279c108c93fe80eb2d47bc2a3c26515d0ea17e555376f4142a50dc8e36] <==
	I1004 03:40:10.340134       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1004 03:40:10.340347       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1004 03:40:10.357401       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1004 03:40:10.357743       1 server_others.go:185] Using iptables Proxier.
	I1004 03:40:10.358092       1 server.go:650] Version: v1.20.0
	I1004 03:40:10.358877       1 config.go:315] Starting service config controller
	I1004 03:40:10.361205       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1004 03:40:10.361398       1 config.go:224] Starting endpoint slice config controller
	I1004 03:40:10.361803       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1004 03:40:10.461386       1 shared_informer.go:247] Caches are synced for service config 
	I1004 03:40:10.462027       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [db56e60ccb26a4841473896cf63de198a919c74df5b3e1845b2a4f73cce7d321] <==
	I1004 03:37:21.524431       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1004 03:37:21.524555       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1004 03:37:21.562571       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1004 03:37:21.562663       1 server_others.go:185] Using iptables Proxier.
	I1004 03:37:21.562926       1 server.go:650] Version: v1.20.0
	I1004 03:37:21.563404       1 config.go:315] Starting service config controller
	I1004 03:37:21.563424       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1004 03:37:21.564607       1 config.go:224] Starting endpoint slice config controller
	I1004 03:37:21.564622       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1004 03:37:21.664408       1 shared_informer.go:247] Caches are synced for service config 
	I1004 03:37:21.664729       1 shared_informer.go:247] Caches are synced for endpoint slice config 
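
Both kube-proxy instances warn `Unknown proxy mode "", assuming iptables proxy`: an empty mode field in the kube-proxy configuration falls back to the iptables proxier, which is the intended default here rather than an error. Assuming the kubeadm-managed ConfigMap layout, the effective setting can be read back with:

    kubectl --context old-k8s-version-445570 -n kube-system \
      get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep mode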
	
	
	==> kube-scheduler [1f76481f61cbe7daf82f893f0b6981b679e5a69b4198f0649d9874f825189c23] <==
	W1004 03:37:01.309377       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:37:01.309592       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:37:01.309696       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:37:01.309781       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:37:01.368609       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1004 03:37:01.368877       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:37:01.368981       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:37:01.369091       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1004 03:37:01.384977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 03:37:01.385335       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 03:37:01.385572       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 03:37:01.385809       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 03:37:01.386042       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:37:01.386394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:37:01.386628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:37:01.386934       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 03:37:01.387159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:37:01.388475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 03:37:01.389234       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 03:37:01.403709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 03:37:02.250814       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 03:37:02.348236       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:37:02.421426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 03:37:02.525122       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1004 03:37:02.969199       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
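
The burst of "forbidden" errors above is the scheduler racing the apiserver's RBAC bootstrap during startup; it resolves on its own once the client-ca cache syncs at 03:37:02.969, so no action is needed. If the errors persisted, the remedy the log itself suggests would look roughly like this (the binding name is illustrative):

    kubectl -n kube-system create rolebinding extension-apiserver-authentication-reader-scheduler \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler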
	
	
	==> kube-scheduler [fd61154c785ee669b3a620fd034d912e5b6005c06a250bcf7606b989ca283b91] <==
	I1004 03:40:00.394931       1 serving.go:331] Generated self-signed cert in-memory
	W1004 03:40:07.808972       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:40:07.809021       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:40:07.809037       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:40:07.809043       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:40:07.964779       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1004 03:40:07.964887       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:40:07.964903       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:40:07.964917       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1004 03:40:08.068507       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 04 03:44:04 old-k8s-version-445570 kubelet[671]: E1004 03:44:04.337451     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:44:15 old-k8s-version-445570 kubelet[671]: I1004 03:44:15.337300     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:44:15 old-k8s-version-445570 kubelet[671]: E1004 03:44:15.338470     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:44:15 old-k8s-version-445570 kubelet[671]: E1004 03:44:15.338490     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:44:26 old-k8s-version-445570 kubelet[671]: E1004 03:44:26.338825     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:44:30 old-k8s-version-445570 kubelet[671]: I1004 03:44:30.337206     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:44:30 old-k8s-version-445570 kubelet[671]: E1004 03:44:30.337556     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:44:37 old-k8s-version-445570 kubelet[671]: E1004 03:44:37.337773     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:44:42 old-k8s-version-445570 kubelet[671]: I1004 03:44:42.337367     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:44:42 old-k8s-version-445570 kubelet[671]: E1004 03:44:42.337758     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:44:48 old-k8s-version-445570 kubelet[671]: E1004 03:44:48.337763     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:44:55 old-k8s-version-445570 kubelet[671]: I1004 03:44:55.340528     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:44:55 old-k8s-version-445570 kubelet[671]: E1004 03:44:55.342140     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:45:00 old-k8s-version-445570 kubelet[671]: E1004 03:45:00.338647     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:45:10 old-k8s-version-445570 kubelet[671]: I1004 03:45:10.337041     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:45:10 old-k8s-version-445570 kubelet[671]: E1004 03:45:10.337424     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:45:13 old-k8s-version-445570 kubelet[671]: E1004 03:45:13.337752     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:45:21 old-k8s-version-445570 kubelet[671]: I1004 03:45:21.337788     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:45:21 old-k8s-version-445570 kubelet[671]: E1004 03:45:21.338227     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:45:27 old-k8s-version-445570 kubelet[671]: E1004 03:45:27.339413     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:45:35 old-k8s-version-445570 kubelet[671]: I1004 03:45:35.337231     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:45:35 old-k8s-version-445570 kubelet[671]: E1004 03:45:35.337992     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
	Oct 04 03:45:42 old-k8s-version-445570 kubelet[671]: E1004 03:45:42.338430     671 pod_workers.go:191] Error syncing pod 645bea79-708e-4fa4-b469-d493c518b4ac ("metrics-server-9975d5f86-7vbmk_kube-system(645bea79-708e-4fa4-b469-d493c518b4ac)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 03:45:47 old-k8s-version-445570 kubelet[671]: I1004 03:45:47.337094     671 scope.go:95] [topologymanager] RemoveContainer - Container ID: 541099ab654adffd4369c8a8d599dd601bc9f82430e011116576d472bc561bac
	Oct 04 03:45:47 old-k8s-version-445570 kubelet[671]: E1004 03:45:47.337464     671 pod_workers.go:191] Error syncing pod c3a92bfd-530b-4a7c-9e24-b8c20a20cdba ("dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-brtqr_kubernetes-dashboard(c3a92bfd-530b-4a7c-9e24-b8c20a20cdba)"
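
Two pods alternate through this kubelet log: metrics-server is stuck in ImagePullBackOff because fake.domain/registry.k8s.io/echoserver:1.4 points at an unresolvable registry (apparently deliberate test scaffolding, given the name), and dashboard-metrics-scraper is in CrashLoopBackOff. To dig further, one might pull the pod events and the scraper's previous-container log (pod name taken from the entries above):

    kubectl --context old-k8s-version-445570 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20
    kubectl --context old-k8s-version-445570 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-brtqr --previous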
	
	
	==> kubernetes-dashboard [e25a99d73dc123b7ad1337a13bac763678ba647ef7414e06362277eb1ceaf920] <==
	2024/10/04 03:40:31 Using namespace: kubernetes-dashboard
	2024/10/04 03:40:31 Using in-cluster config to connect to apiserver
	2024/10/04 03:40:31 Using secret token for csrf signing
	2024/10/04 03:40:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/04 03:40:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/04 03:40:31 Successful initial request to the apiserver, version: v1.20.0
	2024/10/04 03:40:31 Generating JWE encryption key
	2024/10/04 03:40:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/04 03:40:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/04 03:40:31 Initializing JWE encryption key from synchronized object
	2024/10/04 03:40:31 Creating in-cluster Sidecar client
	2024/10/04 03:40:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:40:31 Serving insecurely on HTTP port: 9090
	2024/10/04 03:41:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:41:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:42:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:42:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:43:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:43:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:44:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:44:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:45:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:45:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/04 03:40:31 Starting overwatch
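
Note that "Starting overwatch" is normally the dashboard's first log line, so its position at the tail here is most likely an out-of-order flush rather than a restart (its timestamp matches the startup burst above). The repeated metric client failure is consistent with the kubelet log: the health check goes through the dashboard-metrics-scraper service, which has no ready endpoints while its pod crash-loops. A quick check, assuming the addon's default service name:

	# An empty ENDPOINTS column here would explain "unable to handle the request"
	kubectl --context old-k8s-version-445570 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper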
	
	
	==> storage-provisioner [04caef5abf18eb64fc69564f2f195bf5c77f3dfcdbdf6c9a8ea68e37d9d094bf] <==
	I1004 03:40:11.888341       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 03:40:11.911904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 03:40:11.911974       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 03:40:29.371797       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 03:40:29.371996       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-445570_3f5faf02-0708-4f45-8be9-0032a8c0f8b1!
	I1004 03:40:29.381137       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62c8eb01-4a4a-4bde-81f4-fdc0db4e0eaa", APIVersion:"v1", ResourceVersion:"798", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-445570_3f5faf02-0708-4f45-8be9-0032a8c0f8b1 became leader
	I1004 03:40:29.472224       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-445570_3f5faf02-0708-4f45-8be9-0032a8c0f8b1!
	
	
	==> storage-provisioner [b199a132a65232987bd2f9d36ca0edafd86724a43cd8b1d00bcce04f1b95be77] <==
	I1004 03:37:22.561675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 03:37:22.578344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 03:37:22.578395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 03:37:22.590076       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 03:37:22.590267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-445570_df75ff9a-585a-4488-a2d0-c93e68d8101d!
	I1004 03:37:22.591036       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62c8eb01-4a4a-4bde-81f4-fdc0db4e0eaa", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-445570_df75ff9a-585a-4488-a2d0-c93e68d8101d became leader
	I1004 03:37:22.691412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-445570_df75ff9a-585a-4488-a2d0-c93e68d8101d!
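
The two storage-provisioner blocks are the same component before and after the restart, and both gate startup on leader election: each instance only starts its controller after acquiring the kube-system/k8s.io-minikube-hostpath lease (the second instance waits from 03:40:11 to 03:40:29, presumably for the stale lease to expire). The election record lives on the Endpoints object, whose UID 62c8eb01-4a4a-4bde-81f4-fdc0db4e0eaa is identical in both logs, and can be inspected directly:

	# The control-plane.alpha.kubernetes.io/leader annotation names the current holder
	kubectl --context old-k8s-version-445570 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml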
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-445570 -n old-k8s-version-445570
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-445570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-7vbmk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-445570 describe pod metrics-server-9975d5f86-7vbmk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-445570 describe pod metrics-server-9975d5f86-7vbmk: exit status 1 (102.053526ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-7vbmk" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-445570 describe pod metrics-server-9975d5f86-7vbmk: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.72s)
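
The NotFound above is a benign race in the post-mortem itself, not a second failure: metrics-server is managed by a Deployment, so the pod captured by the list at helpers_test.go:261 had already been replaced by the time the describe ran 102ms later. Selecting by label instead of by pod name sidesteps the race; a sketch, assuming the addon's usual k8s-app=metrics-server label:

	# List non-running pods, then describe the current metrics-server pod via its label
	kubectl --context old-k8s-version-445570 get pods -A --field-selector=status.phase!=Running
	kubectl --context old-k8s-version-445570 -n kube-system describe pod -l k8s-app=metrics-server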


Test pass (300/329)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.35
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.42
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.15
18 TestDownloadOnly/v1.31.1/DeleteAll 0.25
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 216.35
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/PullSecret 10.87
34 TestAddons/parallel/Registry 17.87
35 TestAddons/parallel/Ingress 19.27
36 TestAddons/parallel/InspektorGadget 11.85
37 TestAddons/parallel/Logviewer 6.63
38 TestAddons/parallel/MetricsServer 6.77
40 TestAddons/parallel/CSI 58.46
41 TestAddons/parallel/Headlamp 15.94
42 TestAddons/parallel/CloudSpanner 6.58
43 TestAddons/parallel/LocalPath 54.32
44 TestAddons/parallel/NvidiaDevicePlugin 5.59
45 TestAddons/parallel/Yakd 11.81
46 TestAddons/StoppedEnableDisable 12.21
47 TestCertOptions 39
48 TestCertExpiration 235.34
50 TestForceSystemdFlag 31.88
51 TestForceSystemdEnv 38.97
52 TestDockerEnvContainerd 44.54
57 TestErrorSpam/setup 28.86
58 TestErrorSpam/start 0.73
59 TestErrorSpam/status 1.02
60 TestErrorSpam/pause 1.74
61 TestErrorSpam/unpause 1.85
62 TestErrorSpam/stop 1.66
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 85.93
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 6.25
69 TestFunctional/serial/KubeContext 0.06
70 TestFunctional/serial/KubectlGetPods 0.09
73 TestFunctional/serial/CacheCmd/cache/add_remote 4.07
74 TestFunctional/serial/CacheCmd/cache/add_local 1.29
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
76 TestFunctional/serial/CacheCmd/cache/list 0.06
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
78 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
79 TestFunctional/serial/CacheCmd/cache/delete 0.1
80 TestFunctional/serial/MinikubeKubectlCmd 0.14
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.25
82 TestFunctional/serial/ExtraConfig 48.25
83 TestFunctional/serial/ComponentHealth 0.1
84 TestFunctional/serial/LogsCmd 1.71
85 TestFunctional/serial/LogsFileCmd 1.68
86 TestFunctional/serial/InvalidService 4.59
88 TestFunctional/parallel/ConfigCmd 0.48
89 TestFunctional/parallel/DashboardCmd 8.58
90 TestFunctional/parallel/DryRun 0.42
91 TestFunctional/parallel/InternationalLanguage 0.25
92 TestFunctional/parallel/StatusCmd 1.22
96 TestFunctional/parallel/ServiceCmdConnect 10.69
97 TestFunctional/parallel/AddonsCmd 0.2
98 TestFunctional/parallel/PersistentVolumeClaim 26.18
100 TestFunctional/parallel/SSHCmd 0.76
101 TestFunctional/parallel/CpCmd 2.3
103 TestFunctional/parallel/FileSync 0.3
104 TestFunctional/parallel/CertSync 2.05
108 TestFunctional/parallel/NodeLabels 0.25
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
112 TestFunctional/parallel/License 0.37
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.41
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
126 TestFunctional/parallel/ServiceCmd/List 0.61
127 TestFunctional/parallel/ProfileCmd/profile_list 0.47
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
131 TestFunctional/parallel/MountCmd/any-port 8.35
132 TestFunctional/parallel/ServiceCmd/Format 0.46
133 TestFunctional/parallel/ServiceCmd/URL 0.42
134 TestFunctional/parallel/MountCmd/specific-port 2.17
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.98
136 TestFunctional/parallel/Version/short 0.08
137 TestFunctional/parallel/Version/components 1.34
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
143 TestFunctional/parallel/ImageCommands/Setup 0.79
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.26
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 112.19
161 TestMultiControlPlane/serial/DeployApp 32.79
162 TestMultiControlPlane/serial/PingHostFromPods 1.62
163 TestMultiControlPlane/serial/AddWorkerNode 23.65
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
166 TestMultiControlPlane/serial/CopyFile 18.99
167 TestMultiControlPlane/serial/StopSecondaryNode 12.92
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
169 TestMultiControlPlane/serial/RestartSecondaryNode 18.61
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.07
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 144.01
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.96
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
174 TestMultiControlPlane/serial/StopCluster 36.1
175 TestMultiControlPlane/serial/RestartCluster 79.35
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
177 TestMultiControlPlane/serial/AddSecondaryNode 42.95
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
182 TestJSONOutput/start/Command 47.7
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.75
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.69
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.81
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
207 TestKicCustomNetwork/create_custom_network 37.55
208 TestKicCustomNetwork/use_default_bridge_network 32.82
209 TestKicExistingNetwork 32.02
210 TestKicCustomSubnet 30.85
211 TestKicStaticIP 33.13
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 68.7
216 TestMountStart/serial/StartWithMountFirst 6.47
217 TestMountStart/serial/VerifyMountFirst 0.26
218 TestMountStart/serial/StartWithMountSecond 6.64
219 TestMountStart/serial/VerifyMountSecond 0.25
220 TestMountStart/serial/DeleteFirst 1.62
221 TestMountStart/serial/VerifyMountPostDelete 0.25
222 TestMountStart/serial/Stop 1.22
223 TestMountStart/serial/RestartStopped 8.87
224 TestMountStart/serial/VerifyMountPostStop 0.26
227 TestMultiNode/serial/FreshStart2Nodes 104.97
228 TestMultiNode/serial/DeployApp2Nodes 15.87
229 TestMultiNode/serial/PingHostFrom2Pods 0.96
230 TestMultiNode/serial/AddNode 18.13
231 TestMultiNode/serial/MultiNodeLabels 0.11
232 TestMultiNode/serial/ProfileList 0.65
233 TestMultiNode/serial/CopyFile 10.05
234 TestMultiNode/serial/StopNode 2.32
235 TestMultiNode/serial/StartAfterStop 9.8
236 TestMultiNode/serial/RestartKeepsNodes 79.39
237 TestMultiNode/serial/DeleteNode 5.31
238 TestMultiNode/serial/StopMultiNode 24
239 TestMultiNode/serial/RestartMultiNode 56.29
240 TestMultiNode/serial/ValidateNameConflict 35.51
245 TestPreload 116.8
247 TestScheduledStopUnix 107.62
250 TestInsufficientStorage 10.31
251 TestRunningBinaryUpgrade 77.76
253 TestKubernetesUpgrade 354.01
254 TestMissingContainerUpgrade 171.5
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
257 TestNoKubernetes/serial/StartWithK8s 37.45
258 TestNoKubernetes/serial/StartWithStopK8s 21.06
259 TestNoKubernetes/serial/Start 5.63
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
261 TestNoKubernetes/serial/ProfileList 0.97
262 TestNoKubernetes/serial/Stop 1.21
263 TestNoKubernetes/serial/StartNoArgs 6.72
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
265 TestStoppedBinaryUpgrade/Setup 0.78
266 TestStoppedBinaryUpgrade/Upgrade 130.84
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.28
276 TestPause/serial/Start 92.12
284 TestNetworkPlugins/group/false 3.74
288 TestPause/serial/SecondStartNoReconfiguration 8.37
289 TestPause/serial/Pause 0.95
290 TestPause/serial/VerifyStatus 0.39
291 TestPause/serial/Unpause 0.81
292 TestPause/serial/PauseAgain 1.13
293 TestPause/serial/DeletePaused 2.81
294 TestPause/serial/VerifyDeletedResources 0.47
296 TestStartStop/group/old-k8s-version/serial/FirstStart 173.52
297 TestStartStop/group/old-k8s-version/serial/DeployApp 11.02
299 TestStartStop/group/no-preload/serial/FirstStart 78
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.48
301 TestStartStop/group/old-k8s-version/serial/Stop 12.83
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
304 TestStartStop/group/no-preload/serial/DeployApp 8.45
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
306 TestStartStop/group/no-preload/serial/Stop 12.38
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/no-preload/serial/SecondStart 267.69
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.16
311 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
312 TestStartStop/group/no-preload/serial/Pause 3.34
314 TestStartStop/group/embed-certs/serial/FirstStart 108.56
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
318 TestStartStop/group/old-k8s-version/serial/Pause 3.6
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.1
321 TestStartStop/group/embed-certs/serial/DeployApp 8.36
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
323 TestStartStop/group/embed-certs/serial/Stop 12.42
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.29
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.34
328 TestStartStop/group/embed-certs/serial/SecondStart 274.63
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 294.64
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
334 TestStartStop/group/embed-certs/serial/Pause 3.12
336 TestStartStop/group/newest-cni/serial/FirstStart 35.85
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.04
341 TestNetworkPlugins/group/auto/Start 96.92
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
344 TestStartStop/group/newest-cni/serial/Stop 3.05
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
346 TestStartStop/group/newest-cni/serial/SecondStart 24.11
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
350 TestStartStop/group/newest-cni/serial/Pause 3.77
351 TestNetworkPlugins/group/custom-flannel/Start 54.98
352 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
353 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
354 TestNetworkPlugins/group/auto/KubeletFlags 0.3
355 TestNetworkPlugins/group/auto/NetCatPod 9.28
356 TestNetworkPlugins/group/custom-flannel/DNS 0.18
357 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
358 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
359 TestNetworkPlugins/group/auto/DNS 0.18
360 TestNetworkPlugins/group/auto/Localhost 0.15
361 TestNetworkPlugins/group/auto/HairPin 0.16
362 TestNetworkPlugins/group/kindnet/Start 100.69
363 TestNetworkPlugins/group/flannel/Start 56.37
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
366 TestNetworkPlugins/group/flannel/NetCatPod 9.27
367 TestNetworkPlugins/group/flannel/DNS 0.2
368 TestNetworkPlugins/group/flannel/Localhost 0.16
369 TestNetworkPlugins/group/flannel/HairPin 0.16
370 TestNetworkPlugins/group/enable-default-cni/Start 43.15
371 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
372 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
373 TestNetworkPlugins/group/kindnet/NetCatPod 12.34
374 TestNetworkPlugins/group/kindnet/DNS 0.2
375 TestNetworkPlugins/group/kindnet/Localhost 0.15
376 TestNetworkPlugins/group/kindnet/HairPin 0.18
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
379 TestNetworkPlugins/group/bridge/Start 54.28
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
383 TestNetworkPlugins/group/calico/Start 68.87
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
385 TestNetworkPlugins/group/bridge/NetCatPod 12.34
386 TestNetworkPlugins/group/bridge/DNS 0.27
387 TestNetworkPlugins/group/bridge/Localhost 0.24
388 TestNetworkPlugins/group/bridge/HairPin 0.16
389 TestNetworkPlugins/group/calico/ControllerPod 6.01
390 TestNetworkPlugins/group/calico/KubeletFlags 0.29
391 TestNetworkPlugins/group/calico/NetCatPod 8.29
392 TestNetworkPlugins/group/calico/DNS 0.2
393 TestNetworkPlugins/group/calico/Localhost 0.79
394 TestNetworkPlugins/group/calico/HairPin 0.19

TestDownloadOnly/v1.20.0/json-events (6.35s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-188351 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-188351 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.346825714s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.35s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1004 02:47:54.521565 1154813 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1004 02:47:54.521649 1154813 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
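
The preload-exists check is purely local: it passes as soon as the tarball cached by the json-events subtest is on disk, which is why it takes 0.00s. Outside the harness the equivalent check is just a stat of the cache path (this run overrides MINIKUBE_HOME to the Jenkins workspace shown above; with a default install the path sits under ~/.minikube):

	ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4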

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-188351
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-188351: exit status 85 (71.313663ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-188351 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |          |
	|         | -p download-only-188351        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:47:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:47:48.212405 1154818 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:47:48.212544 1154818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:48.212556 1154818 out.go:358] Setting ErrFile to fd 2...
	I1004 02:47:48.212562 1154818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:48.212808 1154818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	W1004 02:47:48.212943 1154818 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19546-1149434/.minikube/config/config.json: open /home/jenkins/minikube-integration/19546-1149434/.minikube/config/config.json: no such file or directory
	I1004 02:47:48.213326 1154818 out.go:352] Setting JSON to true
	I1004 02:47:48.214219 1154818 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23417,"bootTime":1727986652,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 02:47:48.214288 1154818 start.go:139] virtualization:  
	I1004 02:47:48.217306 1154818 out.go:97] [download-only-188351] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1004 02:47:48.217509 1154818 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball: no such file or directory
	I1004 02:47:48.217545 1154818 notify.go:220] Checking for updates...
	I1004 02:47:48.219830 1154818 out.go:169] MINIKUBE_LOCATION=19546
	I1004 02:47:48.221913 1154818 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:47:48.223884 1154818 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 02:47:48.225845 1154818 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 02:47:48.227786 1154818 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1004 02:47:48.231668 1154818 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 02:47:48.231956 1154818 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:47:48.259845 1154818 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 02:47:48.259948 1154818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:48.317146 1154818 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 02:47:48.307922419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:48.317263 1154818 docker.go:318] overlay module found
	I1004 02:47:48.319446 1154818 out.go:97] Using the docker driver based on user configuration
	I1004 02:47:48.319475 1154818 start.go:297] selected driver: docker
	I1004 02:47:48.319482 1154818 start.go:901] validating driver "docker" against <nil>
	I1004 02:47:48.319599 1154818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:48.361798 1154818 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 02:47:48.352925057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:48.362011 1154818 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:47:48.362314 1154818 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1004 02:47:48.362505 1154818 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 02:47:48.365129 1154818 out.go:169] Using Docker driver with root privileges
	I1004 02:47:48.366808 1154818 cni.go:84] Creating CNI manager for ""
	I1004 02:47:48.366864 1154818 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 02:47:48.366875 1154818 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:47:48.366950 1154818 start.go:340] cluster config:
	{Name:download-only-188351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-188351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:47:48.369147 1154818 out.go:97] Starting "download-only-188351" primary control-plane node in "download-only-188351" cluster
	I1004 02:47:48.369165 1154818 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1004 02:47:48.371180 1154818 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1004 02:47:48.371203 1154818 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1004 02:47:48.371351 1154818 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 02:47:48.386168 1154818 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:47:48.386812 1154818 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1004 02:47:48.386928 1154818 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:47:48.433272 1154818 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1004 02:47:48.433312 1154818 cache.go:56] Caching tarball of preloaded images
	I1004 02:47:48.434069 1154818 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1004 02:47:48.436639 1154818 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1004 02:47:48.436661 1154818 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1004 02:47:48.517269 1154818 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1004 02:47:52.820225 1154818 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1004 02:47:52.820342 1154818 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-188351 host does not exist
	  To start a cluster, run: "minikube start -p download-only-188351"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
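
Two details worth noting in the dump above. First, the nonzero exit from minikube logs is expected: as the closing hint says, a download-only profile never creates a host to collect logs from. Second, the preload download is fetched with a ?checksum=md5:... query and verified before use; the same verification can be repeated by hand against the cached file, using the md5 value from the URL:

	cd /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball
	echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -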

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-188351
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (5.42s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-577482 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-577482 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.41947302s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.42s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1004 02:48:00.348415 1154813 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1004 02:48:00.348466 1154813 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.15s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-577482
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-577482: exit status 85 (152.914806ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-188351 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |                     |
	|         | -p download-only-188351        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC | 04 Oct 24 02:47 UTC |
	| delete  | -p download-only-188351        | download-only-188351 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC | 04 Oct 24 02:47 UTC |
	| start   | -o=json --download-only        | download-only-577482 | jenkins | v1.34.0 | 04 Oct 24 02:47 UTC |                     |
	|         | -p download-only-577482        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/04 02:47:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:47:54.970192 1155023 out.go:345] Setting OutFile to fd 1 ...
	I1004 02:47:54.970314 1155023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:54.970323 1155023 out.go:358] Setting ErrFile to fd 2...
	I1004 02:47:54.970328 1155023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 02:47:54.970567 1155023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 02:47:54.970961 1155023 out.go:352] Setting JSON to true
	I1004 02:47:54.971793 1155023 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23423,"bootTime":1727986652,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 02:47:54.971861 1155023 start.go:139] virtualization:  
	I1004 02:47:54.974035 1155023 out.go:97] [download-only-577482] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 02:47:54.974186 1155023 notify.go:220] Checking for updates...
	I1004 02:47:54.976052 1155023 out.go:169] MINIKUBE_LOCATION=19546
	I1004 02:47:54.977946 1155023 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:47:54.979582 1155023 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 02:47:54.981170 1155023 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 02:47:54.982895 1155023 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1004 02:47:54.986221 1155023 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 02:47:54.986578 1155023 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 02:47:55.024246 1155023 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 02:47:55.024420 1155023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:55.090070 1155023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-04 02:47:55.078635393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:55.090187 1155023 docker.go:318] overlay module found
	I1004 02:47:55.092103 1155023 out.go:97] Using the docker driver based on user configuration
	I1004 02:47:55.092142 1155023 start.go:297] selected driver: docker
	I1004 02:47:55.092150 1155023 start.go:901] validating driver "docker" against <nil>
	I1004 02:47:55.092271 1155023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 02:47:55.141264 1155023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-04 02:47:55.131832913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 02:47:55.141466 1155023 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1004 02:47:55.141729 1155023 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1004 02:47:55.141894 1155023 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 02:47:55.144215 1155023 out.go:169] Using Docker driver with root privileges
	I1004 02:47:55.146214 1155023 cni.go:84] Creating CNI manager for ""
	I1004 02:47:55.146285 1155023 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1004 02:47:55.146297 1155023 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:47:55.146379 1155023 start.go:340] cluster config:
	{Name:download-only-577482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-577482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 02:47:55.148402 1155023 out.go:97] Starting "download-only-577482" primary control-plane node in "download-only-577482" cluster
	I1004 02:47:55.148434 1155023 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1004 02:47:55.150166 1155023 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1004 02:47:55.150210 1155023 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 02:47:55.150316 1155023 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1004 02:47:55.166479 1155023 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1004 02:47:55.166627 1155023 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1004 02:47:55.166648 1155023 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1004 02:47:55.166654 1155023 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1004 02:47:55.166661 1155023 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1004 02:47:55.225387 1155023 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1004 02:47:55.225412 1155023 cache.go:56] Caching tarball of preloaded images
	I1004 02:47:55.225572 1155023 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 02:47:55.243117 1155023 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1004 02:47:55.243145 1155023 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1004 02:47:55.322101 1155023 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1004 02:47:58.486839 1155023 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1004 02:47:58.486985 1155023 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1004 02:47:59.408018 1155023 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1004 02:47:59.408525 1155023 profile.go:143] Saving config to /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/download-only-577482/config.json ...
	I1004 02:47:59.408577 1155023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/download-only-577482/config.json: {Name:mk442bfdf34f5f36d246c50aafb30c9355f94a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:47:59.409284 1155023 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1004 02:47:59.409905 1155023 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19546-1149434/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-577482 host does not exist
	  To start a cluster, run: "minikube start -p download-only-577482"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.15s)

TestDownloadOnly/v1.31.1/DeleteAll (0.25s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.25s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-577482
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.55s)
=== RUN   TestBinaryMirror
I1004 02:48:01.694771 1154813 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-350152 --alsologtostderr --binary-mirror http://127.0.0.1:37997 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-350152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-350152
--- PASS: TestBinaryMirror (0.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-813566
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-813566: exit status 85 (67.71311ms)

-- stdout --
	* Profile "addons-813566" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-813566"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:956: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-813566
addons_test.go:956: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-813566: exit status 85 (62.200905ms)

-- stdout --
	* Profile "addons-813566" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-813566"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (216.35s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-813566 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-813566 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m36.348464421s)
--- PASS: TestAddons/Setup (216.35s)
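Note: the start invocation above is a single long line in the raw log; reflowed for readability (same flags, nothing added), it reads:

    out/minikube-linux-arm64 start -p addons-813566 --wait=true --memory=4000 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --addons=ingress --addons=ingress-dns \
      --addons=storage-provisioner-rancher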

TestAddons/serial/GCPAuth/Namespaces (0.22s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:570: (dbg) Run:  kubectl --context addons-813566 create ns new-namespace
addons_test.go:584: (dbg) Run:  kubectl --context addons-813566 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/PullSecret (10.87s)
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:615: (dbg) Run:  kubectl --context addons-813566 create -f testdata/busybox.yaml
addons_test.go:622: (dbg) Run:  kubectl --context addons-813566 create sa gcp-auth-test
addons_test.go:628: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c9723157-b833-4797-81e5-f63a3a049cdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c9723157-b833-4797-81e5-f63a3a049cdc] Running
addons_test.go:628: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 10.003936419s
addons_test.go:634: (dbg) Run:  kubectl --context addons-813566 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:646: (dbg) Run:  kubectl --context addons-813566 describe sa gcp-auth-test
addons_test.go:660: (dbg) Run:  kubectl --context addons-813566 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:684: (dbg) Run:  kubectl --context addons-813566 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (10.87s)
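Note: the gcp-auth checks above can be replayed by hand; a minimal sketch (assuming the same addons-813566 profile and the repo's testdata/busybox.yaml, with the busybox pod Running) is:

    kubectl --context addons-813566 create -f testdata/busybox.yaml
    kubectl --context addons-813566 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-813566 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-813566 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"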

TestAddons/parallel/Registry (17.87s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:322: registry stabilized in 6.819515ms
addons_test.go:324: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-tx7r8" [08e6728e-1615-49cf-90c5-7adb914e944a] Running
addons_test.go:324: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005798725s
addons_test.go:327: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gssnr" [fd499071-9287-4378-8b68-836755ad3000] Running
addons_test.go:327: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.027020789s
addons_test.go:332: (dbg) Run:  kubectl --context addons-813566 delete po -l run=registry-test --now
addons_test.go:337: (dbg) Run:  kubectl --context addons-813566 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:337: (dbg) Done: kubectl --context addons-813566 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.610033509s)
addons_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 ip
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.87s)
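Note: the registry probe above amounts to resolving the in-cluster Service DNS name and issuing a spider request from a throwaway pod; a minimal sketch is:

    kubectl --context addons-813566 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"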

TestAddons/parallel/Ingress (19.27s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:208: (dbg) Run:  kubectl --context addons-813566 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:233: (dbg) Run:  kubectl --context addons-813566 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:246: (dbg) Run:  kubectl --context addons-813566 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cf8ce7ae-a91c-45a8-acbf-6c85a4877d7b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cf8ce7ae-a91c-45a8-acbf-6c85a4877d7b] Running
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003188389s
I1004 02:57:11.487214 1154813 kapi.go:150] Service nginx in namespace default found.
addons_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:287: (dbg) Run:  kubectl --context addons-813566 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:292: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 ip
addons_test.go:298: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable ingress-dns --alsologtostderr -v=1: (1.856218475s)
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable ingress --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable ingress --alsologtostderr -v=1: (7.770523371s)
--- PASS: TestAddons/parallel/Ingress (19.27s)
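Note: the two ingress checks above can be replayed by hand; a minimal sketch (the 192.168.49.2 node IP is the one printed by the "minikube ip" call in this run) is:

    out/minikube-linux-arm64 -p addons-813566 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2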

TestAddons/parallel/InspektorGadget (11.85s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4vhfk" [9e4a91c4-51bd-4f8b-bc2a-f5ff6dfa7363] Running
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004081774s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable inspektor-gadget --alsologtostderr -v=1: (5.849661582s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

TestAddons/parallel/Logviewer (6.63s)
=== RUN   TestAddons/parallel/Logviewer
=== PAUSE TestAddons/parallel/Logviewer
=== CONT  TestAddons/parallel/Logviewer
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: waiting 8m0s for pods matching "app=logviewer" in namespace "kube-system" ...
helpers_test.go:344: "logviewer-7c79c8bcc9-wks5q" [089aa2eb-91a9-4e35-af2b-4913fed9b821] Running
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: app=logviewer healthy within 6.003500918s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable logviewer --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Logviewer (6.63s)

TestAddons/parallel/MetricsServer (6.77s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:395: metrics-server stabilized in 2.70396ms
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-p2qpr" [f3315201-b874-46dd-b7f9-914da49dbb43] Running
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00491692s
addons_test.go:403: (dbg) Run:  kubectl --context addons-813566 top pods -n kube-system
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.77s)

TestAddons/parallel/CSI (58.46s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1004 02:55:41.875897 1154813 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1004 02:55:41.880524 1154813 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1004 02:55:41.880557 1154813 kapi.go:107] duration metric: took 8.056398ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:489: csi-hostpath-driver pods stabilized in 8.066557ms
addons_test.go:492: (dbg) Run:  kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:497: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/10/04 02:55:47 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:502: (dbg) Run:  kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:507: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [390645e5-9071-4456-b2e0-54b4d57551f8] Pending
helpers_test.go:344: "task-pv-pod" [390645e5-9071-4456-b2e0-54b4d57551f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [390645e5-9071-4456-b2e0-54b4d57551f8] Running
addons_test.go:507: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004822068s
addons_test.go:512: (dbg) Run:  kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:517: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-813566 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-813566 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:522: (dbg) Run:  kubectl --context addons-813566 delete pod task-pv-pod
addons_test.go:528: (dbg) Run:  kubectl --context addons-813566 delete pvc hpvc
addons_test.go:534: (dbg) Run:  kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:549: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b8a29fc7-7500-4092-95ab-40fa4a8abcfd] Pending
helpers_test.go:344: "task-pv-pod-restore" [b8a29fc7-7500-4092-95ab-40fa4a8abcfd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b8a29fc7-7500-4092-95ab-40fa4a8abcfd] Running
addons_test.go:549: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003082554s
addons_test.go:554: (dbg) Run:  kubectl --context addons-813566 delete pod task-pv-pod-restore
addons_test.go:554: (dbg) Done: kubectl --context addons-813566 delete pod task-pv-pod-restore: (1.085914687s)
addons_test.go:558: (dbg) Run:  kubectl --context addons-813566 delete pvc hpvc-restore
addons_test.go:562: (dbg) Run:  kubectl --context addons-813566 delete volumesnapshot new-snapshot-demo
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable volumesnapshots --alsologtostderr -v=1: (1.055124211s)
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.796960742s)
--- PASS: TestAddons/parallel/CSI (58.46s)
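Note: the CSI flow above is provision, attach, snapshot, then restore; a minimal sketch of the kubectl sequence (manifests from the repo's testdata/csi-hostpath-driver directory, readiness waits omitted) is:

    kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-813566 delete pod task-pv-pod
    kubectl --context addons-813566 delete pvc hpvc
    kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-813566 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml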

TestAddons/parallel/Headlamp (15.94s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:744: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-813566 --alsologtostderr -v=1
addons_test.go:744: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-813566 --alsologtostderr -v=1: (1.194099554s)
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-tlnt2" [83311cc5-3c64-4582-a008-ec28ac8113c6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-tlnt2" [83311cc5-3c64-4582-a008-ec28ac8113c6] Running
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004787953s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable headlamp --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable headlamp --alsologtostderr -v=1: (5.740291612s)
--- PASS: TestAddons/parallel/Headlamp (15.94s)

TestAddons/parallel/CloudSpanner (6.58s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-7hznl" [c57d9dc9-8c39-4f39-80f3-ac97a35cdf4b] Running
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004389s
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (54.32s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:894: (dbg) Run:  kubectl --context addons-813566 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:900: (dbg) Run:  kubectl --context addons-813566 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:904: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [49be4689-c2ae-4fe9-a904-bf7e00f49e3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [49be4689-c2ae-4fe9-a904-bf7e00f49e3e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [49be4689-c2ae-4fe9-a904-bf7e00f49e3e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004363406s
addons_test.go:912: (dbg) Run:  kubectl --context addons-813566 get pvc test-pvc -o=json
addons_test.go:921: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 ssh "cat /opt/local-path-provisioner/pvc-889f1778-d23b-4181-bbe1-76904274a6c3_default_test-pvc/file1"
addons_test.go:933: (dbg) Run:  kubectl --context addons-813566 delete pod test-local-path
addons_test.go:937: (dbg) Run:  kubectl --context addons-813566 delete pvc test-pvc
addons_test.go:990: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.435206848s)
--- PASS: TestAddons/parallel/LocalPath (54.32s)
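Note: the local-path check above writes through a locally provisioned PV and reads the file back over SSH; a minimal sketch is below. The pvc-... directory name is generated per claim, so the <pvc-uid> placeholder is hypothetical and should be replaced with the value printed in the run:

    kubectl --context addons-813566 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-813566 apply -f testdata/storage-provisioner-rancher/pod.yaml
    out/minikube-linux-arm64 -p addons-813566 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"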

TestAddons/parallel/NvidiaDevicePlugin (5.59s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xqrwz" [8cc8b509-3784-424e-9f4b-54bf488b54d9] Running
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006997174s
addons_test.go:972: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-813566
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.59s)

TestAddons/parallel/Yakd (11.81s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-smhk2" [c8fc9015-9510-4a16-8258-32885f17982b] Running
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004297236s
addons_test.go:984: (dbg) Run:  out/minikube-linux-arm64 -p addons-813566 addons disable yakd --alsologtostderr -v=1
addons_test.go:984: (dbg) Done: out/minikube-linux-arm64 -p addons-813566 addons disable yakd --alsologtostderr -v=1: (5.808986997s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

TestAddons/StoppedEnableDisable (12.21s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-813566
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-813566: (11.963677046s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-813566
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-813566
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-813566
--- PASS: TestAddons/StoppedEnableDisable (12.21s)
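Note: the sequence above checks that addon toggling still succeeds against a stopped cluster; a minimal sketch is:

    out/minikube-linux-arm64 stop -p addons-813566
    out/minikube-linux-arm64 addons enable dashboard -p addons-813566
    out/minikube-linux-arm64 addons disable dashboard -p addons-813566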

TestCertOptions (39s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-884103 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-884103 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.403555045s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-884103 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-884103 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-884103 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-884103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-884103
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-884103: (1.969277843s)
--- PASS: TestCertOptions (39.00s)
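Note: the certificate-options check above can be replayed by hand, inspecting the generated apiserver certificate for the extra SANs and the non-default port; a minimal sketch is:

    out/minikube-linux-arm64 start -p cert-options-884103 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p cert-options-884103 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"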

TestCertExpiration (235.34s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-671389 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-671389 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (44.064878483s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-671389 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-671389 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.951894679s)
helpers_test.go:175: Cleaning up "cert-expiration-671389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-671389
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-671389: (2.322840982s)
--- PASS: TestCertExpiration (235.34s)
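Note: the two starts above exercise certificate renewal: the first issues short-lived 3m certs, most of the 235s runtime appears to be spent waiting out that window, and the second start renews with an 8760h expiry. A minimal sketch:

    out/minikube-linux-arm64 start -p cert-expiration-671389 --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # wait for the 3m window to lapse, then renew on restart:
    out/minikube-linux-arm64 start -p cert-expiration-671389 --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd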

TestForceSystemdFlag (31.88s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-194016 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1004 03:34:41.776451 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-194016 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.642946836s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-194016 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-194016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-194016
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-194016: (1.963462288s)
--- PASS: TestForceSystemdFlag (31.88s)
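Note: the flag check above starts with --force-systemd and then reads the generated containerd config over SSH; presumably it asserts the systemd cgroup setting, though the assertion itself is not visible in this log. A minimal sketch:

    out/minikube-linux-arm64 start -p force-systemd-flag-194016 --memory=2048 --force-systemd \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p force-systemd-flag-194016 ssh "cat /etc/containerd/config.toml"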

TestForceSystemdEnv (38.97s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-103536 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-103536 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.30553897s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-103536 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-103536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-103536
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-103536: (2.279280711s)
--- PASS: TestForceSystemdEnv (38.97s)

TestDockerEnvContainerd (44.54s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-874342 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-874342 --driver=docker  --container-runtime=containerd: (28.985416581s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-874342"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-B7MQmI4YjJ3m/agent.1177799" SSH_AGENT_PID="1177800" DOCKER_HOST=ssh://docker@127.0.0.1:34257 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-B7MQmI4YjJ3m/agent.1177799" SSH_AGENT_PID="1177800" DOCKER_HOST=ssh://docker@127.0.0.1:34257 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-B7MQmI4YjJ3m/agent.1177799" SSH_AGENT_PID="1177800" DOCKER_HOST=ssh://docker@127.0.0.1:34257 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.270193978s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-B7MQmI4YjJ3m/agent.1177799" SSH_AGENT_PID="1177800" DOCKER_HOST=ssh://docker@127.0.0.1:34257 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-874342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-874342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-874342: (1.937112789s)
--- PASS: TestDockerEnvContainerd (44.54s)
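Note: the docker-env flow above points a host docker client at the daemon inside the minikube node over SSH; the log shows the expanded environment variables, which is what the usual eval form produces. A minimal sketch:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-874342)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls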

TestErrorSpam/setup (28.86s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-719752 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-719752 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-719752 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-719752 --driver=docker  --container-runtime=containerd: (28.860625571s)
--- PASS: TestErrorSpam/setup (28.86s)

TestErrorSpam/start (0.73s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.02s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.74s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 pause
--- PASS: TestErrorSpam/pause (1.74s)

TestErrorSpam/unpause (1.85s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.66s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 stop: (1.288154571s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719752 --log_dir /tmp/nospam-719752 stop
--- PASS: TestErrorSpam/stop (1.66s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19546-1149434/.minikube/files/etc/test/nested/copy/1154813/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (85.93s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421253 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-421253 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m25.925500091s)
--- PASS: TestFunctional/serial/StartWithProxy (85.93s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.25s)
=== RUN   TestFunctional/serial/SoftStart
I1004 03:00:30.482395 1154813 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421253 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-421253 --alsologtostderr -v=8: (6.244033784s)
functional_test.go:663: soft start took 6.246787713s for "functional-421253" cluster.
I1004 03:00:36.726778 1154813 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.25s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-421253 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:3.1: (1.461386125s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:3.3: (1.357740958s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:latest: (1.248818635s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.07s)
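Note: "cache add" caches each image locally and loads it into the cluster's container runtime; a minimal sketch of the commands exercised above:

    out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:3.3
    out/minikube-linux-arm64 -p functional-421253 cache add registry.k8s.io/pause:latest
    out/minikube-linux-arm64 cache list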

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-421253 /tmp/TestFunctionalserialCacheCmdcacheadd_local1119946534/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cache add minikube-local-cache-test:functional-421253
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cache delete minikube-local-cache-test:functional-421253
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-421253
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.649905ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 cache reload: (1.126904653s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
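
The cache_reload sequence above is the interesting one: deleting the image inside the node makes crictl inspecti fail, and minikube cache reload repopulates the container runtime from the host-side cache so the same inspecti succeeds again. A hedged Go sketch of that assertion; the profile name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// crictl runs a crictl subcommand inside the node over minikube ssh.
func crictl(profile string, args ...string) error {
	full := append([]string{"-p", profile, "ssh", "sudo", "crictl"}, args...)
	return exec.Command("minikube", full...).Run()
}

func main() {
	const profile = "demo" // hypothetical profile
	const img = "registry.k8s.io/pause:latest"

	_ = crictl(profile, "rmi", img) // remove the image inside the node
	if crictl(profile, "inspecti", img) == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	// reload pushes everything in the host cache back into the runtime
	_ = exec.Command("minikube", "-p", profile, "cache", "reload").Run()
	if err := crictl(profile, "inspecti", img); err != nil {
		fmt.Println("reload did not restore the image:", err)
	} else {
		fmt.Println("image restored from the host cache")
	}
}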

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 kubectl -- --context functional-421253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.25s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-421253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.25s)

TestFunctional/serial/ExtraConfig (48.25s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-421253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.253861963s)
functional_test.go:761: restart took 48.253980378s for "functional-421253" cluster.
I1004 03:01:33.454147 1154813 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (48.25s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-421253 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
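
ComponentHealth reads the control-plane pods as JSON and checks each is Running and Ready, which is what the phase/status lines above record. A small Go sketch of the same probe via kubectl; the context name (demo) is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields the health check needs.
type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type, Status string
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "demo",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl podList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}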

TestFunctional/serial/LogsCmd (1.71s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 logs: (1.714232119s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.68s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 logs --file /tmp/TestFunctionalserialLogsFileCmd3571889342/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 logs --file /tmp/TestFunctionalserialLogsFileCmd3571889342/001/logs.txt: (1.674282403s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.68s)

TestFunctional/serial/InvalidService (4.59s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-421253 apply -f testdata/invalidsvc.yaml
E1004 03:01:38.696429 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:38.702909 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:38.714328 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:38.735703 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:38.777057 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:38.858565 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:39.020164 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:39.341967 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:01:39.984085 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-421253
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-421253: exit status 115 (624.405134ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32456 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-421253 delete -f testdata/invalidsvc.yaml
E1004 03:01:41.266114 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/InvalidService (4.59s)
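
The SVC_UNREACHABLE diagnosis above follows from the service having no ready endpoints: the invalidsvc.yaml service selects no running pod. A hedged sketch that reproduces the diagnosis by checking endpoints directly (service name as in the log; the context name and the approach are illustrative, not minikube's internal check):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the ready endpoint IPs behind the service, if any.
	out, _ := exec.Command("kubectl", "--context", "demo", "get", "endpoints",
		"invalid-svc", "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no ready endpoints: service not available")
	} else {
		fmt.Println("endpoints:", string(out))
	}
}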

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 config get cpus: exit status 14 (90.074067ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 config get cpus: exit status 14 (82.532323ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
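
ConfigCmd round-trips a key: get on an unset key fails (exit status 14 in this log), set/get succeed, and unset restores the failure. A Go sketch of checking those exit codes, with an illustrative profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode extracts the process exit status, or 0 on success.
func exitCode(err error) int {
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0
}

func mk(args ...string) error {
	return exec.Command("minikube", append([]string{"-p", "demo"}, args...)...).Run()
}

func main() {
	_ = mk("config", "unset", "cpus")
	fmt.Println("get after unset:", exitCode(mk("config", "get", "cpus"))) // expect 14
	_ = mk("config", "set", "cpus", "2")
	fmt.Println("get after set:  ", exitCode(mk("config", "get", "cpus"))) // expect 0
}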

TestFunctional/parallel/DashboardCmd (8.58s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-421253 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-421253 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1192520: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.58s)

TestFunctional/parallel/DryRun (0.42s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-421253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (175.312065ms)
-- stdout --
	* [functional-421253] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1004 03:02:14.254555 1192272 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:02:14.254756 1192272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:02:14.254765 1192272 out.go:358] Setting ErrFile to fd 2...
	I1004 03:02:14.254771 1192272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:02:14.255070 1192272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:02:14.255451 1192272 out.go:352] Setting JSON to false
	I1004 03:02:14.256491 1192272 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24283,"bootTime":1727986652,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 03:02:14.256563 1192272 start.go:139] virtualization:  
	I1004 03:02:14.259650 1192272 out.go:177] * [functional-421253] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:02:14.262625 1192272 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:02:14.262688 1192272 notify.go:220] Checking for updates...
	I1004 03:02:14.267135 1192272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:02:14.270148 1192272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 03:02:14.272317 1192272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 03:02:14.274515 1192272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:02:14.277398 1192272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:02:14.280420 1192272 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 03:02:14.280970 1192272 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:02:14.311191 1192272 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:02:14.311306 1192272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:02:14.364414 1192272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 03:02:14.353967274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:02:14.364526 1192272 docker.go:318] overlay module found
	I1004 03:02:14.367016 1192272 out.go:177] * Using the docker driver based on existing profile
	I1004 03:02:14.368746 1192272 start.go:297] selected driver: docker
	I1004 03:02:14.368766 1192272 start.go:901] validating driver "docker" against &{Name:functional-421253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-421253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:02:14.368879 1192272 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:02:14.371418 1192272 out.go:201] 
	W1004 03:02:14.373521 1192272 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1004 03:02:14.375512 1192272 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421253 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
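
DryRun validates flags without mutating the cluster: the 250MB request is rejected up front with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 here, against a usable minimum of 1800MB), while a dry run with the profile's existing settings exits cleanly. A hedged sketch of the failing half; the profile name is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Intentionally under-provisioned memory; validation should fail fast.
	cmd := exec.Command("minikube", "start", "-p", "demo", "--dry-run",
		"--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("dry-run rejected low memory, exit:", ee.ExitCode()) // expect 23
	}
}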

TestFunctional/parallel/InternationalLanguage (0.25s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-421253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-421253 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (245.210332ms)
-- stdout --
	* [functional-421253] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1004 03:02:14.012439 1192164 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:02:14.012702 1192164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:02:14.012745 1192164 out.go:358] Setting ErrFile to fd 2...
	I1004 03:02:14.012766 1192164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:02:14.013246 1192164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:02:14.013853 1192164 out.go:352] Setting JSON to false
	I1004 03:02:14.015561 1192164 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24282,"bootTime":1727986652,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 03:02:14.015703 1192164 start.go:139] virtualization:  
	I1004 03:02:14.018778 1192164 out.go:177] * [functional-421253] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1004 03:02:14.024061 1192164 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:02:14.024125 1192164 notify.go:220] Checking for updates...
	I1004 03:02:14.026653 1192164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:02:14.030660 1192164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 03:02:14.032810 1192164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 03:02:14.037094 1192164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:02:14.042840 1192164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:02:14.045272 1192164 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 03:02:14.045788 1192164 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:02:14.085048 1192164 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:02:14.085165 1192164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:02:14.186541 1192164 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 03:02:14.175548124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:02:14.186652 1192164 docker.go:318] overlay module found
	I1004 03:02:14.189496 1192164 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1004 03:02:14.191925 1192164 start.go:297] selected driver: docker
	I1004 03:02:14.191941 1192164 start.go:901] validating driver "docker" against &{Name:functional-421253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-421253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1004 03:02:14.192067 1192164 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:02:14.194749 1192164 out.go:201] 
	W1004 03:02:14.196731 1192164 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1004 03:02:14.198695 1192164 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.22s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

TestFunctional/parallel/ServiceCmdConnect (10.69s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-421253 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-421253 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-thdwg" [5b6a8e3d-409b-4e70-8ce1-65c1d03f9ff6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-thdwg" [5b6a8e3d-409b-4e70-8ce1-65c1d03f9ff6] Running
E1004 03:01:59.192083 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003597254s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30447
functional_test.go:1675: http://192.168.49.2:30447: success! body:

Hostname: hello-node-connect-65d86f57f4-thdwg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30447
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.69s)
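
ServiceCmdConnect resolves the NodePort URL via minikube service --url and then issues a plain HTTP GET, which is what produced the echoserver response above. A minimal Go sketch of that check; the profile and deployment names are illustrative:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the externally reachable URL of the NodePort service.
	out, err := exec.Command("minikube", "-p", "demo",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d, %d bytes\n", url, resp.StatusCode, len(body))
}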

TestFunctional/parallel/AddonsCmd (0.2s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (26.18s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4d8cce97-bd45-42e1-b464-429b074834e9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003973513s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-421253 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-421253 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-421253 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-421253 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [829c3ad8-0414-4897-bee8-dc9e11c5a498] Pending
helpers_test.go:344: "sp-pod" [829c3ad8-0414-4897-bee8-dc9e11c5a498] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [829c3ad8-0414-4897-bee8-dc9e11c5a498] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003551766s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-421253 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-421253 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-421253 delete -f testdata/storage-provisioner/pod.yaml: (1.074897202s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-421253 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [796f2050-d26e-477a-a07a-3822a719335c] Pending
helpers_test.go:344: "sp-pod" [796f2050-d26e-477a-a07a-3822a719335c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004128296s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-421253 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.18s)
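
The PVC test's second apply is the point: data written through the first pod must survive the pod's deletion because it lives on the claim, not in the container. A hedged Go sketch of the write/delete/recreate/read cycle (context name illustrative; a real check also waits for the new pod to reach Running):

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against a fixed context and returns its combined output.
func kc(args ...string) ([]byte, error) {
	return exec.Command("kubectl", append([]string{"--context", "demo"}, args...)...).CombinedOutput()
}

func main() {
	must := func(out []byte, err error) {
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
	}
	must(kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")) // write via pod 1
	must(kc("delete", "-f", "testdata/storage-provisioner/pod.yaml"))
	must(kc("apply", "-f", "testdata/storage-provisioner/pod.yaml"))
	// once the replacement pod is Running, the file must still be there:
	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	must(out, err)
	fmt.Printf("PVC contents: %s", out)
}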

TestFunctional/parallel/SSHCmd (0.76s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

TestFunctional/parallel/CpCmd (2.3s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh -n functional-421253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cp functional-421253:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1318502352/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh -n functional-421253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh -n functional-421253 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.30s)

TestFunctional/parallel/FileSync (0.3s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1154813/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo cat /etc/test/nested/copy/1154813/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (2.05s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1154813.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo cat /etc/ssl/certs/1154813.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1154813.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo cat /usr/share/ca-certificates/1154813.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11548132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo cat /etc/ssl/certs/11548132.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11548132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo cat /usr/share/ca-certificates/11548132.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)
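
CertSync expects a host-provided cert named after the test pid to appear inside the node at three locations, including an openssl subject-hash name under /etc/ssl/certs. A sketch of probing those paths over minikube ssh; the filenames are taken from this run and the profile name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/1154813.pem",
		"/usr/share/ca-certificates/1154813.pem",
		"/etc/ssl/certs/51391683.0", // subject-hash-named copy
	}
	for _, p := range paths {
		// cat succeeding is enough to prove the file was synced in.
		err := exec.Command("minikube", "-p", "demo", "ssh", "sudo cat "+p).Run()
		fmt.Printf("%-45s present=%v\n", p, err == nil)
	}
}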

TestFunctional/parallel/NodeLabels (0.25s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-421253 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.25s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 ssh "sudo systemctl is-active docker": exit status 1 (332.341277ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 ssh "sudo systemctl is-active crio": exit status 1 (292.102011ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
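
NonActiveRuntimeDisabled asserts runtime exclusivity: with containerd selected, the docker and crio units must be inactive. systemctl is-active prints the state and exits non-zero for an inactive unit (status 3, which minikube ssh surfaces as the non-zero exits above). A short Go sketch; the profile name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		// err != nil here means the unit is not active, which is the pass case.
		out, err := exec.Command("minikube", "-p", "demo", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%s: %s (inactive=%v)\n", unit, string(out), err != nil)
	}
}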

TestFunctional/parallel/License (0.37s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421253 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421253 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-421253 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-421253 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1189691: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-421253 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-421253 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [816546fc-fbe6-4e92-8c27-73e95b18e221] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1004 03:01:43.828165 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [816546fc-fbe6-4e92-8c27-73e95b18e221] Running
E1004 03:01:48.949896 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004036181s
I1004 03:01:51.804065 1154813 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-421253 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
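
With the tunnel running, the LoadBalancer service is assigned a reachable ingress IP, which the test reads straight from the service status. A one-call Go sketch of the same read; the context name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Empty output here would mean the tunnel has not assigned an IP yet.
	out, err := exec.Command("kubectl", "--context", "demo", "get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ingress IP:", string(out)) // e.g. 10.107.149.10 in this run
}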

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.149.10 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-421253 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-421253 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-421253 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-cppp5" [4fecc88e-1a22-4f13-89e6-5d4e370bd111] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-cppp5" [4fecc88e-1a22-4f13-89e6-5d4e370bd111] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003872028s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ServiceCmd/List (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "413.818531ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "58.400186ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 service list -o json
functional_test.go:1494: Took "595.783533ms" to run "out/minikube-linux-arm64 -p functional-421253 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "456.851703ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "76.741896ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31889
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/MountCmd/any-port (8.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdany-port2055804174/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728010931558483104" to /tmp/TestFunctionalparallelMountCmdany-port2055804174/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728010931558483104" to /tmp/TestFunctionalparallelMountCmdany-port2055804174/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728010931558483104" to /tmp/TestFunctionalparallelMountCmdany-port2055804174/001/test-1728010931558483104
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (441.255643ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1004 03:02:12.001978 1154813 retry.go:31] will retry after 394.096427ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 03:02 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 03:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 03:02 test-1728010931558483104
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh cat /mount-9p/test-1728010931558483104
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-421253 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5b910c43-4301-4609-9487-9ea49bdb260f] Pending
helpers_test.go:344: "busybox-mount" [5b910c43-4301-4609-9487-9ea49bdb260f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5b910c43-4301-4609-9487-9ea49bdb260f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5b910c43-4301-4609-9487-9ea49bdb260f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005045114s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-421253 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo umount -f /mount-9p"
E1004 03:02:19.681422 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdany-port2055804174/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.35s)
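
Note that the first findmnt probe fails and is retried: minikube mount runs as a background process and the 9p mount appears asynchronously inside the guest. A hand-run sketch of the same verification (the host path is a placeholder):

    out/minikube-linux-arm64 mount -p functional-421253 /tmp/host-dir:/mount-9p &
    sleep 2   # give the 9p server a moment to come up
    out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T /mount-9p | grep 9p"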

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31889
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
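
With the NodePort URL known, the endpoint can be exercised directly; a quick manual check (echoserver simply echoes request details back):

    URL=$(out/minikube-linux-arm64 -p functional-421253 service hello-node --url)
    curl -s "$URL"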

TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdspecific-port444838818/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (509.968719ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1004 03:02:20.418064 1154813 retry.go:31] will retry after 420.258908ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdspecific-port444838818/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 ssh "sudo umount -f /mount-9p": exit status 1 (327.275786ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-421253 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdspecific-port444838818/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4078231049/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4078231049/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4078231049/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T" /mount1
2024/10/04 03:02:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T" /mount1: (1.069488449s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-421253 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4078231049/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4078231049/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-421253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4078231049/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)
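
The --kill=true form used above tears down every mount process for the profile at once, which is why the three per-mount stop attempts that follow find no parent process. The cleanup, run by hand:

    out/minikube-linux-arm64 mount -p functional-421253 --kill=true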

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 version -o=json --components: (1.338446093s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421253 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-421253
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-421253
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421253 image ls --format short --alsologtostderr:
I1004 03:02:31.177066 1195083 out.go:345] Setting OutFile to fd 1 ...
I1004 03:02:31.177265 1195083 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.177278 1195083 out.go:358] Setting ErrFile to fd 2...
I1004 03:02:31.177285 1195083 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.177549 1195083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
I1004 03:02:31.178203 1195083 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.178369 1195083 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.178956 1195083 cli_runner.go:164] Run: docker container inspect functional-421253 --format={{.State.Status}}
I1004 03:02:31.198502 1195083 ssh_runner.go:195] Run: systemctl --version
I1004 03:02:31.198552 1195083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421253
I1004 03:02:31.221265 1195083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34267 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/functional-421253/id_rsa Username:docker}
I1004 03:02:31.320740 1195083 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
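
As the stderr trace shows, image ls is a thin wrapper: it SSHes into the node and reads crictl's JSON image list. The same data can be pulled manually with:

    out/minikube-linux-arm64 -p functional-421253 ssh -- sudo crictl images --output json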

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421253 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kicbase/echo-server               | functional-421253  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
| docker.io/library/nginx                     | latest             | sha256:048e09 | 69.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-421253  | sha256:e9d1b2 | 993B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421253 image ls --format table --alsologtostderr:
I1004 03:02:31.800838 1195244 out.go:345] Setting OutFile to fd 1 ...
I1004 03:02:31.801010 1195244 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.801035 1195244 out.go:358] Setting ErrFile to fd 2...
I1004 03:02:31.801042 1195244 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.801365 1195244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
I1004 03:02:31.802093 1195244 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.802306 1195244 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.802831 1195244 cli_runner.go:164] Run: docker container inspect functional-421253 --format={{.State.Status}}
I1004 03:02:31.827403 1195244 ssh_runner.go:195] Run: systemctl --version
I1004 03:02:31.827473 1195244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421253
I1004 03:02:31.846899 1195244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34267 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/functional-421253/id_rsa Username:docker}
I1004 03:02:31.944935 1195244 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421253 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:e9d1b230191d5ac0456460a323a8882ae8b71197e2ec9f6e0f92e50aa1edd0e8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-421253"],"size":"993"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-421253"],"size":"2173567"},{"id":"sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600401"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421253 image ls --format json --alsologtostderr:
I1004 03:02:31.529561 1195152 out.go:345] Setting OutFile to fd 1 ...
I1004 03:02:31.529758 1195152 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.529771 1195152 out.go:358] Setting ErrFile to fd 2...
I1004 03:02:31.529776 1195152 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.530132 1195152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
I1004 03:02:31.530887 1195152 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.531050 1195152 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.531596 1195152 cli_runner.go:164] Run: docker container inspect functional-421253 --format={{.State.Status}}
I1004 03:02:31.550278 1195152 ssh_runner.go:195] Run: systemctl --version
I1004 03:02:31.550356 1195152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421253
I1004 03:02:31.574853 1195152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34267 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/functional-421253/id_rsa Username:docker}
I1004 03:02:31.668775 1195152 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
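
Because this format is machine-readable it composes with standard tooling; for example (assuming jq is installed, and using the repoTags/size fields visible in the output above):

    out/minikube-linux-arm64 -p functional-421253 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'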

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-421253 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-421253
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:e9d1b230191d5ac0456460a323a8882ae8b71197e2ec9f6e0f92e50aa1edd0e8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-421253
size: "993"
- id: sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "69600401"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421253 image ls --format yaml --alsologtostderr:
I1004 03:02:31.179621 1195084 out.go:345] Setting OutFile to fd 1 ...
I1004 03:02:31.179760 1195084 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.179822 1195084 out.go:358] Setting ErrFile to fd 2...
I1004 03:02:31.179843 1195084 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.180080 1195084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
I1004 03:02:31.180763 1195084 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.180918 1195084 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.181433 1195084 cli_runner.go:164] Run: docker container inspect functional-421253 --format={{.State.Status}}
I1004 03:02:31.207068 1195084 ssh_runner.go:195] Run: systemctl --version
I1004 03:02:31.207250 1195084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421253
I1004 03:02:31.235460 1195084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34267 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/functional-421253/id_rsa Username:docker}
I1004 03:02:31.336305 1195084 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-421253 ssh pgrep buildkitd: exit status 1 (326.129831ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image build -t localhost/my-image:functional-421253 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 image build -t localhost/my-image:functional-421253 testdata/build --alsologtostderr: (3.398731909s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-421253 image build -t localhost/my-image:functional-421253 testdata/build --alsologtostderr:
I1004 03:02:31.776874 1195239 out.go:345] Setting OutFile to fd 1 ...
I1004 03:02:31.777489 1195239 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.777505 1195239 out.go:358] Setting ErrFile to fd 2...
I1004 03:02:31.777510 1195239 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1004 03:02:31.777762 1195239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
I1004 03:02:31.780080 1195239 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.781272 1195239 config.go:182] Loaded profile config "functional-421253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1004 03:02:31.781915 1195239 cli_runner.go:164] Run: docker container inspect functional-421253 --format={{.State.Status}}
I1004 03:02:31.800735 1195239 ssh_runner.go:195] Run: systemctl --version
I1004 03:02:31.800784 1195239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421253
I1004 03:02:31.829440 1195239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34267 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/functional-421253/id_rsa Username:docker}
I1004 03:02:31.924782 1195239 build_images.go:161] Building image from path: /tmp/build.24619386.tar
I1004 03:02:31.924851 1195239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1004 03:02:31.934555 1195239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.24619386.tar
I1004 03:02:31.938092 1195239 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.24619386.tar: stat -c "%s %y" /var/lib/minikube/build/build.24619386.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.24619386.tar': No such file or directory
I1004 03:02:31.938127 1195239 ssh_runner.go:362] scp /tmp/build.24619386.tar --> /var/lib/minikube/build/build.24619386.tar (3072 bytes)
I1004 03:02:31.965474 1195239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.24619386
I1004 03:02:31.978963 1195239 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.24619386 -xf /var/lib/minikube/build/build.24619386.tar
I1004 03:02:31.990502 1195239 containerd.go:394] Building image: /var/lib/minikube/build/build.24619386
I1004 03:02:31.990588 1195239 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.24619386 --local dockerfile=/var/lib/minikube/build/build.24619386 --output type=image,name=localhost/my-image:functional-421253
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:119f87e56f11e3bd6dd0df2e7fedb8819781b54a921eba8914d98163fd09d48d
#8 exporting manifest sha256:119f87e56f11e3bd6dd0df2e7fedb8819781b54a921eba8914d98163fd09d48d 0.0s done
#8 exporting config sha256:38cda6118b650696b4d407e784ae7921af903da5ff3a0bee73319a0c0630360a 0.0s done
#8 naming to localhost/my-image:functional-421253 done
#8 DONE 0.1s
I1004 03:02:35.067489 1195239 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.24619386 --local dockerfile=/var/lib/minikube/build/build.24619386 --output type=image,name=localhost/my-image:functional-421253: (3.076869247s)
I1004 03:02:35.067566 1195239 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.24619386
I1004 03:02:35.078505 1195239 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.24619386.tar
I1004 03:02:35.091468 1195239 build_images.go:217] Built localhost/my-image:functional-421253 from /tmp/build.24619386.tar
I1004 03:02:35.091501 1195239 build_images.go:133] succeeded building to: functional-421253
I1004 03:02:35.091507 1195239 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
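
The buildkit steps logged above imply a three-instruction Dockerfile (FROM the busybox image, RUN true, ADD content.txt). A self-contained approximation of the build, with the Dockerfile reconstructed from those steps rather than copied from the actual testdata/build directory:

    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo demo > content.txt
    out/minikube-linux-arm64 -p functional-421253 image build \
      -t localhost/my-image:functional-421253 .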

TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-421253
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image load --daemon kicbase/echo-server:functional-421253 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 image load --daemon kicbase/echo-server:functional-421253 --alsologtostderr: (1.225277331s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image load --daemon kicbase/echo-server:functional-421253 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 image load --daemon kicbase/echo-server:functional-421253 --alsologtostderr: (1.02116606s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-421253
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image load --daemon kicbase/echo-server:functional-421253 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-421253 image load --daemon kicbase/echo-server:functional-421253 --alsologtostderr: (1.073039447s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image save kicbase/echo-server:functional-421253 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image rm kicbase/echo-server:functional-421253 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-421253
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-421253 image save --daemon kicbase/echo-server:functional-421253 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-421253
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
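
Taken together, the last few tests exercise a full save/load round trip. A condensed manual version (the tarball path is a placeholder):

    out/minikube-linux-arm64 -p functional-421253 image save \
      kicbase/echo-server:functional-421253 /tmp/echo-server.tar
    out/minikube-linux-arm64 -p functional-421253 image rm kicbase/echo-server:functional-421253
    out/minikube-linux-arm64 -p functional-421253 image load /tmp/echo-server.tar
    out/minikube-linux-arm64 -p functional-421253 image ls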

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-421253
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-421253
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-421253
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (112.19s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-150247 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1004 03:03:00.647226 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:04:22.568985 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-150247 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m51.24747276s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (112.19s)
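
For context, --ha provisions additional control-plane nodes before the status check runs; the start invocation, reduced to its essential flags as logged above:

    out/minikube-linux-arm64 start -p ha-150247 --ha --wait=true --memory=2200 \
      --driver=docker --container-runtime=containerd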

TestMultiControlPlane/serial/DeployApp (32.79s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-150247 -- rollout status deployment/busybox: (29.854200129s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-4zwbx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-6f6nz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-99nxb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-4zwbx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-6f6nz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-99nxb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-4zwbx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-6f6nz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-99nxb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.79s)
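
The nine exec calls above form a 3x3 matrix: each busybox replica must resolve an external name (kubernetes.io), the short in-cluster service name, and its fully qualified form. A minimal Go sketch of that loop; the kubectlExec helper is hypothetical and stands in for the test's own runners:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubectlExec is a hypothetical helper; the real test shells out via
    // out/minikube-linux-arm64 kubectl, as the log lines above show.
    func kubectlExec(pod string, args ...string) error {
        all := append([]string{"--context", "ha-150247", "exec", pod, "--"}, args...)
        out, err := exec.Command("kubectl", all...).CombinedOutput()
        fmt.Printf("%s: %s", pod, out)
        return err
    }

    func main() {
        pods := []string{"busybox-7dff88458-4zwbx", "busybox-7dff88458-6f6nz", "busybox-7dff88458-99nxb"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, name := range names { // external name first, then the cluster names
            for _, pod := range pods {
                if err := kubectlExec(pod, "nslookup", name); err != nil {
                    fmt.Println("lookup failed:", err)
                }
            }
        }
    }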

TestMultiControlPlane/serial/PingHostFromPods (1.62s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-4zwbx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-4zwbx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-6f6nz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-6f6nz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-99nxb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-150247 -- exec busybox-7dff88458-99nxb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
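
The pipeline above leans on busybox nslookup printing the resolved address of host.minikube.internal on its fifth output line; the third field of that line is the host IP that the follow-up `ping -c 1` targets (192.168.49.1, the docker bridge gateway). A sketch of the same extraction in Go; the sample output layout is an assumption about busybox's formatting:

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: take the
    // fifth line of the nslookup output and return its third field.
    func hostIPFromNslookup(out string) string {
        lines := strings.Split(out, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Fields(lines[4]) // NR==5
        if len(fields) < 3 {
            return ""
        }
        return fields[2] // cut -d' ' -f3
    }

    func main() {
        // assumed busybox nslookup output shape
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.49.1 host.minikube.internal\n"
        fmt.Println(hostIPFromNslookup(sample)) // 192.168.49.1 -- then ping it
    }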

TestMultiControlPlane/serial/AddWorkerNode (23.65s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-150247 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-150247 -v=7 --alsologtostderr: (22.65149959s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.65s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-150247 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.010558226s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

TestMultiControlPlane/serial/CopyFile (18.99s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-150247 status --output json -v=7 --alsologtostderr: (1.102196383s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp testdata/cp-test.txt ha-150247:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3195904199/001/cp-test_ha-150247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247:/home/docker/cp-test.txt ha-150247-m02:/home/docker/cp-test_ha-150247_ha-150247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test_ha-150247_ha-150247-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247:/home/docker/cp-test.txt ha-150247-m03:/home/docker/cp-test_ha-150247_ha-150247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test_ha-150247_ha-150247-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247:/home/docker/cp-test.txt ha-150247-m04:/home/docker/cp-test_ha-150247_ha-150247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test_ha-150247_ha-150247-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp testdata/cp-test.txt ha-150247-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3195904199/001/cp-test_ha-150247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m02:/home/docker/cp-test.txt ha-150247:/home/docker/cp-test_ha-150247-m02_ha-150247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test_ha-150247-m02_ha-150247.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m02:/home/docker/cp-test.txt ha-150247-m03:/home/docker/cp-test_ha-150247-m02_ha-150247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test_ha-150247-m02_ha-150247-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m02:/home/docker/cp-test.txt ha-150247-m04:/home/docker/cp-test_ha-150247-m02_ha-150247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test_ha-150247-m02_ha-150247-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp testdata/cp-test.txt ha-150247-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3195904199/001/cp-test_ha-150247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m03:/home/docker/cp-test.txt ha-150247:/home/docker/cp-test_ha-150247-m03_ha-150247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test_ha-150247-m03_ha-150247.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m03:/home/docker/cp-test.txt ha-150247-m02:/home/docker/cp-test_ha-150247-m03_ha-150247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test_ha-150247-m03_ha-150247-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m03:/home/docker/cp-test.txt ha-150247-m04:/home/docker/cp-test_ha-150247-m03_ha-150247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test_ha-150247-m03_ha-150247-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp testdata/cp-test.txt ha-150247-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3195904199/001/cp-test_ha-150247-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m04:/home/docker/cp-test.txt ha-150247:/home/docker/cp-test_ha-150247-m04_ha-150247.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247 "sudo cat /home/docker/cp-test_ha-150247-m04_ha-150247.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m04:/home/docker/cp-test.txt ha-150247-m02:/home/docker/cp-test_ha-150247-m04_ha-150247-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m02 "sudo cat /home/docker/cp-test_ha-150247-m04_ha-150247-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 cp ha-150247-m04:/home/docker/cp-test.txt ha-150247-m03:/home/docker/cp-test_ha-150247-m04_ha-150247-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 ssh -n ha-150247-m03 "sudo cat /home/docker/cp-test_ha-150247-m04_ha-150247-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.99s)
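
The long cp/ssh sequence above is a pairwise matrix: seed testdata/cp-test.txt on each node, copy it from that node to every other node (plus one pull down to a local temp dir), and read the file back over `minikube ssh` after every hop. A hedged sketch of the same loop; the run helper is hypothetical and the local temp-dir pull is omitted:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run is a hypothetical wrapper around the binary used throughout this
    // report; the real test drives it through helpers in helpers_test.go.
    func run(args ...string) {
        cmd := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "ha-150247"}, args...)...)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("%v: %s\n", err, out)
        }
    }

    func main() {
        nodes := []string{"ha-150247", "ha-150247-m02", "ha-150247-m03", "ha-150247-m04"}
        for _, src := range nodes {
            // seed the fixture on src, then read it back over ssh
            run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
            run("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
            // fan the file out from src to every other node, verifying each hop
            for _, dst := range nodes {
                if dst == src {
                    continue
                }
                dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
                run("cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
                run("ssh", "-n", dst, "sudo cat "+dstPath)
            }
        }
    }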

TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-150247 node stop m02 -v=7 --alsologtostderr: (12.183325326s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr: exit status 7 (735.775657ms)
-- stdout --
	ha-150247
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150247-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150247-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150247-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1004 03:06:00.838441 1211480 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:06:00.838627 1211480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:06:00.838640 1211480 out.go:358] Setting ErrFile to fd 2...
	I1004 03:06:00.838645 1211480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:06:00.838916 1211480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:06:00.839114 1211480 out.go:352] Setting JSON to false
	I1004 03:06:00.839163 1211480 mustload.go:65] Loading cluster: ha-150247
	I1004 03:06:00.839228 1211480 notify.go:220] Checking for updates...
	I1004 03:06:00.840537 1211480 config.go:182] Loaded profile config "ha-150247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 03:06:00.840566 1211480 status.go:174] checking status of ha-150247 ...
	I1004 03:06:00.841297 1211480 cli_runner.go:164] Run: docker container inspect ha-150247 --format={{.State.Status}}
	I1004 03:06:00.859694 1211480 status.go:371] ha-150247 host status = "Running" (err=<nil>)
	I1004 03:06:00.859723 1211480 host.go:66] Checking if "ha-150247" exists ...
	I1004 03:06:00.860049 1211480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-150247
	I1004 03:06:00.882231 1211480 host.go:66] Checking if "ha-150247" exists ...
	I1004 03:06:00.883105 1211480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:06:00.883214 1211480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-150247
	I1004 03:06:00.906095 1211480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34272 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/ha-150247/id_rsa Username:docker}
	I1004 03:06:01.001893 1211480 ssh_runner.go:195] Run: systemctl --version
	I1004 03:06:01.009377 1211480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:06:01.021446 1211480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:06:01.079220 1211480 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-04 03:06:01.069308948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:06:01.080114 1211480 kubeconfig.go:125] found "ha-150247" server: "https://192.168.49.254:8443"
	I1004 03:06:01.080167 1211480 api_server.go:166] Checking apiserver status ...
	I1004 03:06:01.080387 1211480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:06:01.091708 1211480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	I1004 03:06:01.101411 1211480 api_server.go:182] apiserver freezer: "5:freezer:/docker/7256efd147b8f8ec93d4e0d4c1000c6432b1a7fae7b439e35726f8792cd8a4e2/kubepods/burstable/podc28a7a954d5548f4757056046cc69a6f/90aee5a842cb91adca45154fece4f205c1c4af1631c6ae98ef4489af47ea9974"
	I1004 03:06:01.101488 1211480 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7256efd147b8f8ec93d4e0d4c1000c6432b1a7fae7b439e35726f8792cd8a4e2/kubepods/burstable/podc28a7a954d5548f4757056046cc69a6f/90aee5a842cb91adca45154fece4f205c1c4af1631c6ae98ef4489af47ea9974/freezer.state
	I1004 03:06:01.110494 1211480 api_server.go:204] freezer state: "THAWED"
	I1004 03:06:01.110523 1211480 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1004 03:06:01.118515 1211480 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1004 03:06:01.118547 1211480 status.go:463] ha-150247 apiserver status = Running (err=<nil>)
	I1004 03:06:01.118559 1211480 status.go:176] ha-150247 status: &{Name:ha-150247 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:06:01.118577 1211480 status.go:174] checking status of ha-150247-m02 ...
	I1004 03:06:01.118891 1211480 cli_runner.go:164] Run: docker container inspect ha-150247-m02 --format={{.State.Status}}
	I1004 03:06:01.136762 1211480 status.go:371] ha-150247-m02 host status = "Stopped" (err=<nil>)
	I1004 03:06:01.136786 1211480 status.go:384] host is not running, skipping remaining checks
	I1004 03:06:01.136793 1211480 status.go:176] ha-150247-m02 status: &{Name:ha-150247-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:06:01.136813 1211480 status.go:174] checking status of ha-150247-m03 ...
	I1004 03:06:01.137132 1211480 cli_runner.go:164] Run: docker container inspect ha-150247-m03 --format={{.State.Status}}
	I1004 03:06:01.154774 1211480 status.go:371] ha-150247-m03 host status = "Running" (err=<nil>)
	I1004 03:06:01.154802 1211480 host.go:66] Checking if "ha-150247-m03" exists ...
	I1004 03:06:01.155125 1211480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-150247-m03
	I1004 03:06:01.174825 1211480 host.go:66] Checking if "ha-150247-m03" exists ...
	I1004 03:06:01.175157 1211480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:06:01.175204 1211480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-150247-m03
	I1004 03:06:01.193247 1211480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34282 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/ha-150247-m03/id_rsa Username:docker}
	I1004 03:06:01.285038 1211480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:06:01.296588 1211480 kubeconfig.go:125] found "ha-150247" server: "https://192.168.49.254:8443"
	I1004 03:06:01.296620 1211480 api_server.go:166] Checking apiserver status ...
	I1004 03:06:01.296667 1211480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:06:01.307513 1211480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I1004 03:06:01.317187 1211480 api_server.go:182] apiserver freezer: "5:freezer:/docker/f4d98083f496b0ad6d59a7a87be687f4d4fb30ca5ba9e021f44a573d4c72a030/kubepods/burstable/podddc6d148aec8feaac71aa1dd17e2aba1/9f6ee45aeda988b242b7d3fd7944048edd5af04026854e7e2cc7b9ea8cb6d9db"
	I1004 03:06:01.317332 1211480 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f4d98083f496b0ad6d59a7a87be687f4d4fb30ca5ba9e021f44a573d4c72a030/kubepods/burstable/podddc6d148aec8feaac71aa1dd17e2aba1/9f6ee45aeda988b242b7d3fd7944048edd5af04026854e7e2cc7b9ea8cb6d9db/freezer.state
	I1004 03:06:01.326731 1211480 api_server.go:204] freezer state: "THAWED"
	I1004 03:06:01.326774 1211480 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1004 03:06:01.334643 1211480 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1004 03:06:01.334672 1211480 status.go:463] ha-150247-m03 apiserver status = Running (err=<nil>)
	I1004 03:06:01.334682 1211480 status.go:176] ha-150247-m03 status: &{Name:ha-150247-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:06:01.334698 1211480 status.go:174] checking status of ha-150247-m04 ...
	I1004 03:06:01.335023 1211480 cli_runner.go:164] Run: docker container inspect ha-150247-m04 --format={{.State.Status}}
	I1004 03:06:01.351263 1211480 status.go:371] ha-150247-m04 host status = "Running" (err=<nil>)
	I1004 03:06:01.351300 1211480 host.go:66] Checking if "ha-150247-m04" exists ...
	I1004 03:06:01.351630 1211480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-150247-m04
	I1004 03:06:01.383319 1211480 host.go:66] Checking if "ha-150247-m04" exists ...
	I1004 03:06:01.383729 1211480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:06:01.383787 1211480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-150247-m04
	I1004 03:06:01.405551 1211480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34287 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/ha-150247-m04/id_rsa Username:docker}
	I1004 03:06:01.501787 1211480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:06:01.513685 1211480 status.go:176] ha-150247-m04 status: &{Name:ha-150247-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
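
The stderr trace shows how `minikube status` decides an apiserver is Running: it pgreps the kube-apiserver process, confirms the process's freezer cgroup reports THAWED, and only then probes /healthz on the shared control-plane endpoint. A minimal sketch of just that last probe; the endpoint comes from the kubeconfig line above, and the InsecureSkipVerify shortcut is an assumption for illustration (the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // skip certificate verification only because this sketch has no CA bundle
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.254:8443/healthz")
        if err != nil {
            fmt.Println("apiserver status = Stopped:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // the trace above logs exactly this: "returned 200: ok"
        fmt.Printf("https://192.168.49.254:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }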

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-150247 node start m02 -v=7 --alsologtostderr: (17.481050081s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr: (1.005468062s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.073166081s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.01s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-150247 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-150247 -v=7 --alsologtostderr
E1004 03:06:38.695342 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:43.396690 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:43.403400 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:43.414728 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:43.436154 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:43.477528 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:43.558923 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:43.720395 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:44.042103 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:44.684153 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:45.965525 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:48.527387 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:06:53.649489 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-150247 -v=7 --alsologtostderr: (37.398235839s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-150247 --wait=true -v=7 --alsologtostderr
E1004 03:07:03.890819 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:07:06.410651 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:07:24.372404 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:08:05.333895 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-150247 --wait=true -v=7 --alsologtostderr: (1m46.455050266s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-150247
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.01s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.96s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-150247 node delete m03 -v=7 --alsologtostderr: (9.026840726s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.96s)
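
The go-template in the final `kubectl get nodes` call prints the status of each node's Ready condition, one per line, so the test can assert every surviving node reports True after the delete. A rough Go equivalent of that filter, using illustrative struct shapes rather than client-go types:

    package main

    import "fmt"

    // Illustrative shapes only; the real objects are corev1.Node values.
    type condition struct{ Type, Status string }
    type nodeStatus struct{ Conditions []condition }
    type node struct{ Status nodeStatus }

    // readyStatuses mirrors the template: for every node, emit the
    // status of its "Ready" condition.
    func readyStatuses(nodes []node) []string {
        var out []string
        for _, n := range nodes {
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" {
                    out = append(out, c.Status)
                }
            }
        }
        return out
    }

    func main() {
        nodes := []node{
            {Status: nodeStatus{Conditions: []condition{{"Ready", "True"}}}},
            {Status: nodeStatus{Conditions: []condition{{"Ready", "True"}}}},
        }
        fmt.Println(readyStatuses(nodes)) // [True True] -- any False fails the check
    }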

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (36.1s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 stop -v=7 --alsologtostderr
E1004 03:09:27.255597 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-150247 stop -v=7 --alsologtostderr: (35.990663911s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr: exit status 7 (109.525492ms)
-- stdout --
	ha-150247
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150247-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150247-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1004 03:09:32.746428 1225853 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:09:32.746558 1225853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:09:32.746568 1225853 out.go:358] Setting ErrFile to fd 2...
	I1004 03:09:32.746574 1225853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:09:32.746841 1225853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:09:32.747024 1225853 out.go:352] Setting JSON to false
	I1004 03:09:32.747059 1225853 mustload.go:65] Loading cluster: ha-150247
	I1004 03:09:32.747113 1225853 notify.go:220] Checking for updates...
	I1004 03:09:32.747537 1225853 config.go:182] Loaded profile config "ha-150247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 03:09:32.747552 1225853 status.go:174] checking status of ha-150247 ...
	I1004 03:09:32.748081 1225853 cli_runner.go:164] Run: docker container inspect ha-150247 --format={{.State.Status}}
	I1004 03:09:32.767837 1225853 status.go:371] ha-150247 host status = "Stopped" (err=<nil>)
	I1004 03:09:32.767861 1225853 status.go:384] host is not running, skipping remaining checks
	I1004 03:09:32.767869 1225853 status.go:176] ha-150247 status: &{Name:ha-150247 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:09:32.767902 1225853 status.go:174] checking status of ha-150247-m02 ...
	I1004 03:09:32.768213 1225853 cli_runner.go:164] Run: docker container inspect ha-150247-m02 --format={{.State.Status}}
	I1004 03:09:32.793978 1225853 status.go:371] ha-150247-m02 host status = "Stopped" (err=<nil>)
	I1004 03:09:32.794002 1225853 status.go:384] host is not running, skipping remaining checks
	I1004 03:09:32.794010 1225853 status.go:176] ha-150247-m02 status: &{Name:ha-150247-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:09:32.794038 1225853 status.go:174] checking status of ha-150247-m04 ...
	I1004 03:09:32.794332 1225853 cli_runner.go:164] Run: docker container inspect ha-150247-m04 --format={{.State.Status}}
	I1004 03:09:32.811566 1225853 status.go:371] ha-150247-m04 host status = "Stopped" (err=<nil>)
	I1004 03:09:32.811586 1225853 status.go:384] host is not running, skipping remaining checks
	I1004 03:09:32.811593 1225853 status.go:176] ha-150247-m04 status: &{Name:ha-150247-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.10s)

TestMultiControlPlane/serial/RestartCluster (79.35s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-150247 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-150247 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.392791602s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.35s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (42.95s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-150247 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-150247 --control-plane -v=7 --alsologtostderr: (41.954485503s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-150247 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.050757228s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestJSONOutput/start/Command (47.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-603710 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1004 03:12:11.097683 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-603710 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (47.698514899s)
--- PASS: TestJSONOutput/start/Command (47.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
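
The Distinct/IncreasingCurrentSteps subtests audit the step events the `start` run just emitted: data.currentstep values must never repeat and never go backwards (gaps are fine, since steps can be skipped). A compact sketch of that invariant; the field name is taken from the event JSON visible later in this report:

    package main

    import "fmt"

    // stepsStrictlyIncrease reports whether the currentstep sequence is
    // distinct and monotonically increasing, the invariant both parallel
    // subtests assert over the recorded events.
    func stepsStrictlyIncrease(steps []int) bool {
        for i := 1; i < len(steps); i++ {
            if steps[i] <= steps[i-1] {
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println(stepsStrictlyIncrease([]int{0, 1, 3, 4})) // true: a gap is allowed
        fmt.Println(stepsStrictlyIncrease([]int{0, 2, 2}))    // false: a repeated step
    }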

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-603710 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-603710 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-603710 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-603710 --output=json --user=testUser: (5.807661598s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-408076 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-408076 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.33263ms)
-- stdout --
	{"specversion":"1.0","id":"310ffbd0-d80a-4039-b891-94f01218fb4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-408076] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff84316b-5f99-46b5-9ce2-9b9f15651b80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"199a7d00-117a-40ab-9f52-f395a069e7fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"49e353ae-5df5-43bc-b1ee-17d9b186f3d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig"}}
	{"specversion":"1.0","id":"42622197-9f2f-4776-ab44-808f37a002de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube"}}
	{"specversion":"1.0","id":"c0ee5866-8576-4b31-aaa7-dfd2c50de400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"10c7688b-70e7-479c-ab73-616b9ddea41e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"47e0604d-b4d7-4b88-8bd6-d505191542dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-408076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-408076
--- PASS: TestErrorJSONOutput (0.21s)
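
Each stdout line above is a CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the exit code, message, and issue name the test asserts on. A hedged decoding sketch; the struct mirrors only the fields visible in this report, not minikube's own types:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event covers just the fields seen in the stdout above; data values
    // are all strings in the emitted JSON ("exitcode":"56", etc.).
    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip any non-JSON lines
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                // e.g. exit 56: The driver 'fail' is not supported on linux/arm64
                fmt.Printf("exit %s: %s\n", e.Data["exitcode"], e.Data["message"])
            }
        }
    }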

TestKicCustomNetwork/create_custom_network (37.55s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-522117 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-522117 --network=: (35.477941009s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-522117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-522117
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-522117: (2.053621178s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.55s)

TestKicCustomNetwork/use_default_bridge_network (32.82s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-662080 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-662080 --network=bridge: (30.892821187s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-662080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-662080
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-662080: (1.90131115s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.82s)

TestKicExistingNetwork (32.02s)

=== RUN   TestKicExistingNetwork
I1004 03:13:54.720718 1154813 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1004 03:13:54.736932 1154813 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1004 03:13:54.737027 1154813 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1004 03:13:54.737045 1154813 cli_runner.go:164] Run: docker network inspect existing-network
W1004 03:13:54.751661 1154813 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1004 03:13:54.751695 1154813 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1004 03:13:54.751710 1154813 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1004 03:13:54.751812 1154813 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1004 03:13:54.767834 1154813 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9a360696897c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9f:3a:5e:6c} reservation:<nil>}
I1004 03:13:54.768223 1154813 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c3b850}
I1004 03:13:54.768253 1154813 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1004 03:13:54.768462 1154813 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1004 03:13:54.841466 1154813 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-489906 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-489906 --network=existing-network: (29.85066558s)
helpers_test.go:175: Cleaning up "existing-network-489906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-489906
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-489906: (2.019256003s)
I1004 03:14:26.727283 1154813 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.02s)
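Note: the network_create.go lines above show the subnet scan at work: 192.168.49.0/24 is already held by an existing bridge, so minikube settles on 192.168.58.0/24. A minimal Go sketch of that kind of scan follows; it is a hypothetical helper, not minikube's actual network package, and the step of 9 between candidates is inferred from the 49/58/67 subnets seen in this report:

```go
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24 private subnets and returns the first
// one that does not collide with a subnet already in use. In minikube the
// taken list comes from `docker network inspect`; here it is a plain slice.
func firstFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
	for third := 49; third <= 254; third += 9 {
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return nil, err
		}
		free := true
		for _, t := range taken {
			// For equal-size /24s, mutual containment of the network
			// addresses is enough to detect an overlap.
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	_, used, _ := net.ParseCIDR("192.168.49.0/24") // held by the existing bridge above
	free, err := firstFreeSubnet([]*net.IPNet{used})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", free) // prints 192.168.58.0/24
}
```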

TestKicCustomSubnet (30.85s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-550712 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-550712 --subnet=192.168.60.0/24: (28.703759313s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-550712 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-550712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-550712
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-550712: (2.125436975s)
--- PASS: TestKicCustomSubnet (30.85s)

TestKicStaticIP (33.13s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-714165 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-714165 --static-ip=192.168.200.200: (30.838923398s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-714165 ip
helpers_test.go:175: Cleaning up "static-ip-714165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-714165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-714165: (2.129629803s)
--- PASS: TestKicStaticIP (33.13s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-831705 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-831705 --driver=docker  --container-runtime=containerd: (29.435631215s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-834520 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-834520 --driver=docker  --container-runtime=containerd: (33.960330057s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-831705
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-834520
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-834520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-834520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-834520: (2.060871512s)
helpers_test.go:175: Cleaning up "first-831705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-831705
E1004 03:16:38.694664 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-831705: (1.938589543s)
--- PASS: TestMinikubeProfile (68.70s)
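Note: `profile list -ojson` (run twice above) emits machine-readable profile data that scripts can consume. A minimal Go sketch of decoding it follows; the binary name on PATH and the "valid"/"Name" schema are assumptions that may differ between minikube releases:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just enough of the `profile list -ojson` payload for
// this sketch; the field names are assumptions, not a verified schema.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	// Assumes a minikube binary on PATH; this report uses out/minikube-linux-arm64.
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var list profileList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, p := range list.Valid {
		fmt.Println("profile:", p.Name) // e.g. first-831705, second-834520
	}
}
```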

TestMountStart/serial/StartWithMountFirst (6.47s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-416595 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1004 03:16:43.393994 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-416595 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.46593136s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.47s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-416595 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-418876 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-418876 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.642347973s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.64s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-418876 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-416595 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-416595 --alsologtostderr -v=5: (1.621655831s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-418876 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-418876
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-418876: (1.219946333s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-418876
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-418876: (7.866568538s)
--- PASS: TestMountStart/serial/RestartStopped (8.87s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-418876 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (104.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-589363 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1004 03:18:01.772430 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-589363 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m44.475780327s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.97s)

TestMultiNode/serial/DeployApp2Nodes (15.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-589363 -- rollout status deployment/busybox: (14.056130517s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-5h4lc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-bzhnv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-5h4lc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-bzhnv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-5h4lc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-bzhnv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.87s)

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-5h4lc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-5h4lc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-bzhnv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-589363 -- exec busybox-7dff88458-bzhnv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)

TestMultiNode/serial/AddNode (18.13s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-589363 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-589363 -v 3 --alsologtostderr: (17.48036486s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.13s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-589363 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (10.05s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp testdata/cp-test.txt multinode-589363:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2987148984/001/cp-test_multinode-589363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363:/home/docker/cp-test.txt multinode-589363-m02:/home/docker/cp-test_multinode-589363_multinode-589363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m02 "sudo cat /home/docker/cp-test_multinode-589363_multinode-589363-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363:/home/docker/cp-test.txt multinode-589363-m03:/home/docker/cp-test_multinode-589363_multinode-589363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m03 "sudo cat /home/docker/cp-test_multinode-589363_multinode-589363-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp testdata/cp-test.txt multinode-589363-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2987148984/001/cp-test_multinode-589363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363-m02:/home/docker/cp-test.txt multinode-589363:/home/docker/cp-test_multinode-589363-m02_multinode-589363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363 "sudo cat /home/docker/cp-test_multinode-589363-m02_multinode-589363.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363-m02:/home/docker/cp-test.txt multinode-589363-m03:/home/docker/cp-test_multinode-589363-m02_multinode-589363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m03 "sudo cat /home/docker/cp-test_multinode-589363-m02_multinode-589363-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp testdata/cp-test.txt multinode-589363-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2987148984/001/cp-test_multinode-589363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363-m03:/home/docker/cp-test.txt multinode-589363:/home/docker/cp-test_multinode-589363-m03_multinode-589363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363 "sudo cat /home/docker/cp-test_multinode-589363-m03_multinode-589363.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 cp multinode-589363-m03:/home/docker/cp-test.txt multinode-589363-m02:/home/docker/cp-test_multinode-589363-m03_multinode-589363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 ssh -n multinode-589363-m02 "sudo cat /home/docker/cp-test_multinode-589363-m03_multinode-589363-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.05s)
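Note: every `cp` above is immediately verified with an `ssh -n <node> "sudo cat <dst>"`. A minimal Go sketch of that copy-then-verify round trip, shelling out to a `minikube` binary assumed to be on PATH (how the ssh subcommand joins its arguments is also an assumption):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify runs `minikube cp` and then reads the file back over ssh,
// failing if the bytes differ -- the same round trip the helpers above
// perform for every node pair. Trailing whitespace is trimmed because the
// ssh layer may append a newline.
func copyAndVerify(profile, src, node, dst string) error {
	cp := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+dst).Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("content mismatch on %s:%s", node, dst)
	}
	return nil
}

func main() {
	err := copyAndVerify("multinode-589363", "testdata/cp-test.txt",
		"multinode-589363-m02", "/home/docker/cp-test.txt")
	fmt.Println(err) // nil when the copy round-trips cleanly
}
```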

TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-589363 node stop m03: (1.240796816s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-589363 status: exit status 7 (525.018977ms)

-- stdout --
	multinode-589363
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-589363-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-589363-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr: exit status 7 (549.045374ms)

-- stdout --
	multinode-589363
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-589363-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-589363-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1004 03:19:39.655129 1279293 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:19:39.655356 1279293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:19:39.655385 1279293 out.go:358] Setting ErrFile to fd 2...
	I1004 03:19:39.655406 1279293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:19:39.655720 1279293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:19:39.655952 1279293 out.go:352] Setting JSON to false
	I1004 03:19:39.656018 1279293 mustload.go:65] Loading cluster: multinode-589363
	I1004 03:19:39.656056 1279293 notify.go:220] Checking for updates...
	I1004 03:19:39.656601 1279293 config.go:182] Loaded profile config "multinode-589363": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 03:19:39.656657 1279293 status.go:174] checking status of multinode-589363 ...
	I1004 03:19:39.657653 1279293 cli_runner.go:164] Run: docker container inspect multinode-589363 --format={{.State.Status}}
	I1004 03:19:39.680753 1279293 status.go:371] multinode-589363 host status = "Running" (err=<nil>)
	I1004 03:19:39.680786 1279293 host.go:66] Checking if "multinode-589363" exists ...
	I1004 03:19:39.681117 1279293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-589363
	I1004 03:19:39.716890 1279293 host.go:66] Checking if "multinode-589363" exists ...
	I1004 03:19:39.717209 1279293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:19:39.717277 1279293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-589363
	I1004 03:19:39.737632 1279293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34392 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/multinode-589363/id_rsa Username:docker}
	I1004 03:19:39.830015 1279293 ssh_runner.go:195] Run: systemctl --version
	I1004 03:19:39.834687 1279293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:19:39.847049 1279293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:19:39.900790 1279293 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-04 03:19:39.890994996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:19:39.901384 1279293 kubeconfig.go:125] found "multinode-589363" server: "https://192.168.67.2:8443"
	I1004 03:19:39.901422 1279293 api_server.go:166] Checking apiserver status ...
	I1004 03:19:39.901467 1279293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 03:19:39.912556 1279293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	I1004 03:19:39.921563 1279293 api_server.go:182] apiserver freezer: "5:freezer:/docker/567401c1ae6c55d5c97a2f1234b7f7ab0bbc0539bef5ba96ef029e6f2f54e703/kubepods/burstable/poda09463602cb855a1b8fc544f5dc2c100/e3f7a106541aeb8bbfe23fd416d42648c7439dd0f4bc73d2169a6482bfd2f670"
	I1004 03:19:39.921638 1279293 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/567401c1ae6c55d5c97a2f1234b7f7ab0bbc0539bef5ba96ef029e6f2f54e703/kubepods/burstable/poda09463602cb855a1b8fc544f5dc2c100/e3f7a106541aeb8bbfe23fd416d42648c7439dd0f4bc73d2169a6482bfd2f670/freezer.state
	I1004 03:19:39.930729 1279293 api_server.go:204] freezer state: "THAWED"
	I1004 03:19:39.930762 1279293 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1004 03:19:39.938515 1279293 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1004 03:19:39.938542 1279293 status.go:463] multinode-589363 apiserver status = Running (err=<nil>)
	I1004 03:19:39.938553 1279293 status.go:176] multinode-589363 status: &{Name:multinode-589363 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:19:39.938601 1279293 status.go:174] checking status of multinode-589363-m02 ...
	I1004 03:19:39.938942 1279293 cli_runner.go:164] Run: docker container inspect multinode-589363-m02 --format={{.State.Status}}
	I1004 03:19:39.955499 1279293 status.go:371] multinode-589363-m02 host status = "Running" (err=<nil>)
	I1004 03:19:39.955529 1279293 host.go:66] Checking if "multinode-589363-m02" exists ...
	I1004 03:19:39.955858 1279293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-589363-m02
	I1004 03:19:39.973258 1279293 host.go:66] Checking if "multinode-589363-m02" exists ...
	I1004 03:19:39.973567 1279293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 03:19:39.973615 1279293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-589363-m02
	I1004 03:19:39.990245 1279293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34397 SSHKeyPath:/home/jenkins/minikube-integration/19546-1149434/.minikube/machines/multinode-589363-m02/id_rsa Username:docker}
	I1004 03:19:40.098409 1279293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 03:19:40.111987 1279293 status.go:176] multinode-589363-m02 status: &{Name:multinode-589363-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:19:40.112040 1279293 status.go:174] checking status of multinode-589363-m03 ...
	I1004 03:19:40.112426 1279293 cli_runner.go:164] Run: docker container inspect multinode-589363-m03 --format={{.State.Status}}
	I1004 03:19:40.133596 1279293 status.go:371] multinode-589363-m03 host status = "Stopped" (err=<nil>)
	I1004 03:19:40.133621 1279293 status.go:384] host is not running, skipping remaining checks
	I1004 03:19:40.133630 1279293 status.go:176] multinode-589363-m03 status: &{Name:multinode-589363-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
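Note: the stderr trace above shows the status probe chain: container state via `docker container inspect`, kubelet via `systemctl is-active`, then an HTTPS GET against /healthz that answers "200: ok". A minimal Go sketch of that final step follows; certificate verification is skipped here for brevity, whereas the real status code path handles the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of probe the trace shows against
// https://192.168.67.2:8443/healthz. InsecureSkipVerify stands in for
// proper cluster-CA handling; treat this as debug-grade only.
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d: %s", resp.StatusCode, body), nil
}

func main() {
	status, err := checkHealthz("https://192.168.67.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	fmt.Println(status) // a healthy apiserver answers "200: ok"
}
```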

TestMultiNode/serial/StartAfterStop (9.8s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-589363 node start m03 -v=7 --alsologtostderr: (9.023347557s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.80s)

TestMultiNode/serial/RestartKeepsNodes (79.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-589363
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-589363
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-589363: (24.966704865s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-589363 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-589363 --wait=true -v=8 --alsologtostderr: (54.283112229s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-589363
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.39s)

TestMultiNode/serial/DeleteNode (5.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-589363 node delete m03: (4.659732008s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-589363 stop: (23.817072391s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-589363 status: exit status 7 (98.132845ms)

-- stdout --
	multinode-589363
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-589363-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr: exit status 7 (88.951503ms)

-- stdout --
	multinode-589363
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-589363-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1004 03:21:38.610055 1287289 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:21:38.610256 1287289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:21:38.610283 1287289 out.go:358] Setting ErrFile to fd 2...
	I1004 03:21:38.610303 1287289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:21:38.610566 1287289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:21:38.610783 1287289 out.go:352] Setting JSON to false
	I1004 03:21:38.610838 1287289 mustload.go:65] Loading cluster: multinode-589363
	I1004 03:21:38.610871 1287289 notify.go:220] Checking for updates...
	I1004 03:21:38.611329 1287289 config.go:182] Loaded profile config "multinode-589363": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 03:21:38.611367 1287289 status.go:174] checking status of multinode-589363 ...
	I1004 03:21:38.611941 1287289 cli_runner.go:164] Run: docker container inspect multinode-589363 --format={{.State.Status}}
	I1004 03:21:38.629831 1287289 status.go:371] multinode-589363 host status = "Stopped" (err=<nil>)
	I1004 03:21:38.629854 1287289 status.go:384] host is not running, skipping remaining checks
	I1004 03:21:38.629861 1287289 status.go:176] multinode-589363 status: &{Name:multinode-589363 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 03:21:38.629898 1287289 status.go:174] checking status of multinode-589363-m02 ...
	I1004 03:21:38.630196 1287289 cli_runner.go:164] Run: docker container inspect multinode-589363-m02 --format={{.State.Status}}
	I1004 03:21:38.649929 1287289 status.go:371] multinode-589363-m02 host status = "Stopped" (err=<nil>)
	I1004 03:21:38.649954 1287289 status.go:384] host is not running, skipping remaining checks
	I1004 03:21:38.649962 1287289 status.go:176] multinode-589363-m02 status: &{Name:multinode-589363-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

TestMultiNode/serial/RestartMultiNode (56.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-589363 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1004 03:21:38.694617 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:21:43.393842 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-589363 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.591275058s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-589363 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.29s)

TestMultiNode/serial/ValidateNameConflict (35.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-589363
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-589363-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-589363-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.693663ms)

-- stdout --
	* [multinode-589363-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-589363-m02' is duplicated with machine name 'multinode-589363-m02' in profile 'multinode-589363'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-589363-m03 --driver=docker  --container-runtime=containerd
E1004 03:23:06.459408 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-589363-m03 --driver=docker  --container-runtime=containerd: (33.042143061s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-589363
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-589363: exit status 80 (347.970926ms)

-- stdout --
	* Adding node m03 to cluster multinode-589363 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-589363-m03 already exists in multinode-589363-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-589363-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-589363-m03: (1.962861811s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.51s)
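Note: both rejections above come from name-uniqueness validation: a new profile may not reuse an existing profile name or any machine name inside one. A minimal Go sketch of that check with hypothetical types (not minikube's actual config package):

```go
package main

import "fmt"

// profile is a hypothetical stand-in for minikube's profile config: a
// profile name plus the machine names it owns (a multinode profile owns
// name, name-m02, name-m03, ...).
type profile struct {
	Name     string
	Machines []string
}

// validateName rejects a requested profile name that duplicates an existing
// profile or any machine inside one, mirroring the MK_USAGE exit above.
func validateName(requested string, existing []profile) error {
	for _, p := range existing {
		if p.Name == requested {
			return fmt.Errorf("profile name %q already exists", requested)
		}
		for _, m := range p.Machines {
			if m == requested {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					requested, m, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{
		Name:     "multinode-589363",
		Machines: []string{"multinode-589363", "multinode-589363-m02", "multinode-589363-m03"},
	}}
	fmt.Println(validateName("multinode-589363-m02", existing)) // rejected, as above
	fmt.Println(validateName("multinode-589363-m04", existing)) // <nil>: unique
}
```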

TestPreload (116.8s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-464231 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-464231 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m19.981548751s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-464231 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-464231 image pull gcr.io/k8s-minikube/busybox: (2.08738709s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-464231
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-464231: (12.030930044s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-464231 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-464231 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.878546174s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-464231 image list
helpers_test.go:175: Cleaning up "test-preload-464231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-464231
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-464231: (2.49041324s)
--- PASS: TestPreload (116.80s)

                                                
                                    
x
+
TestScheduledStopUnix (107.62s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-449503 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-449503 --memory=2048 --driver=docker  --container-runtime=containerd: (31.319023384s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-449503 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-449503 -n scheduled-stop-449503
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-449503 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1004 03:25:42.970689 1154813 retry.go:31] will retry after 93.787µs: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.971860 1154813 retry.go:31] will retry after 108.382µs: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.972975 1154813 retry.go:31] will retry after 208.589µs: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.974097 1154813 retry.go:31] will retry after 378.202µs: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.975178 1154813 retry.go:31] will retry after 416.819µs: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.976357 1154813 retry.go:31] will retry after 1.061155ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.977499 1154813 retry.go:31] will retry after 660.636µs: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.978628 1154813 retry.go:31] will retry after 1.233594ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.980799 1154813 retry.go:31] will retry after 3.324505ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.985023 1154813 retry.go:31] will retry after 5.508648ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.991244 1154813 retry.go:31] will retry after 5.319159ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:42.997481 1154813 retry.go:31] will retry after 9.766675ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:43.007742 1154813 retry.go:31] will retry after 18.638052ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:43.027012 1154813 retry.go:31] will retry after 14.209717ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:43.042284 1154813 retry.go:31] will retry after 36.078715ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
I1004 03:25:43.078464 1154813 retry.go:31] will retry after 64.443134ms: open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/scheduled-stop-449503/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-449503 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-449503 -n scheduled-stop-449503
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-449503
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-449503 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1004 03:26:38.694821 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:26:43.397936 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-449503
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-449503: exit status 7 (66.625864ms)

-- stdout --
	scheduled-stop-449503
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-449503 -n scheduled-stop-449503
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-449503 -n scheduled-stop-449503: exit status 7 (74.417656ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-449503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-449503
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-449503: (4.739037274s)
--- PASS: TestScheduledStopUnix (107.62s)
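Note: the retry.go lines above poll for the scheduled-stop pid file with a jittered, roughly doubling backoff. A minimal Go sketch of that pattern follows (a hypothetical helper illustrating the cadence, not minikube's retry package):

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls for path with a jittered, roughly doubling backoff,
// matching the "will retry after ..." cadence in the log above.
func waitForFile(path string, maxWait time.Duration) error {
	backoff := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Jitter keeps concurrent pollers from retrying in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %s not found\n", sleep, path)
		time.Sleep(sleep)
		backoff *= 2
	}
	return fmt.Errorf("timed out after %v waiting for %s", maxWait, path)
}

func main() {
	fmt.Println(waitForFile("/tmp/scheduled-stop.pid", 2*time.Second))
}
```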

TestInsufficientStorage (10.31s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-130235 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-130235 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.874256497s)

-- stdout --
	{"specversion":"1.0","id":"b86aaad8-bc8b-4323-a6ce-e4fcf66a5781","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-130235] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2ee3515-a0f3-4991-9c58-e25285be7fba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"ab3f803d-5f0a-461a-8ffd-14c27f5fee23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"88dc2a70-2ad9-4678-8c14-9754aa13c7d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig"}}
	{"specversion":"1.0","id":"20891f0b-4b06-4939-bcc9-1a9f079357c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube"}}
	{"specversion":"1.0","id":"77437c0b-2c73-48c6-a94a-c37b0d481d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"581961db-3fd6-4548-afc4-9db467dfe34c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a8e2c8c-3d63-40c4-b80f-26cfc937e9ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f8146b94-b229-4958-a63c-625c3460619f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6f2876c4-cfd4-485a-b0dc-d4bd5fde64c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4415c6a3-8576-4342-bd52-c406d0946bbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1e8553c0-1955-4c4d-9528-d39b2223964b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-130235\" primary control-plane node in \"insufficient-storage-130235\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1398a2dc-ad80-432b-bba3-e5d00b9ecc50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"80adad5e-25b4-468a-b2dc-3d0f5f14911a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"987046c0-550b-47f4-8104-790461e3c7e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-130235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-130235 --output=json --layout=cluster: exit status 7 (279.032264ms)

-- stdout --
	{"Name":"insufficient-storage-130235","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-130235","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1004 03:27:06.933480 1305908 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-130235" does not appear in /home/jenkins/minikube-integration/19546-1149434/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-130235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-130235 --output=json --layout=cluster: exit status 7 (271.975728ms)

-- stdout --
	{"Name":"insufficient-storage-130235","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-130235","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1004 03:27:07.208134 1305970 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-130235" does not appear in /home/jenkins/minikube-integration/19546-1149434/kubeconfig
	E1004 03:27:07.218171 1305970 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/insufficient-storage-130235/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-130235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-130235
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-130235: (1.879753325s)
--- PASS: TestInsufficientStorage (10.31s)
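Note: the storage gate is driven by the test-only environment variables visible in the JSON stream above. A hedged sketch of reproducing the exit-26 failure and then bypassing it, assuming MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE are honored the same way outside this harness:

    # pretend the disk holds 100GB with only 19GB free, as in this run
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    out/minikube-linux-arm64 start -p insufficient-storage-130235 --output=json   # exits 26 (RSRC_DOCKER_STORAGE)
    out/minikube-linux-arm64 start -p insufficient-storage-130235 --force         # per the error text, --force skips the check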

TestRunningBinaryUpgrade (77.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2633166831 start -p running-upgrade-520136 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2633166831 start -p running-upgrade-520136 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (42.804337442s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-520136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-520136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.577506438s)
helpers_test.go:175: Cleaning up "running-upgrade-520136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-520136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-520136: (2.66063327s)
--- PASS: TestRunningBinaryUpgrade (77.76s)
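Note: the binary-upgrade tests provision a profile with a released minikube and then hand the same profile to the binary under test. A sketch of the running-upgrade flow, taken from the commands above (the /tmp path is this run's temp copy of the v1.26.0 release binary):

    /tmp/minikube-v1.26.0.2633166831 start -p running-upgrade-520136 --memory=2200 --vm-driver=docker --container-runtime=containerd
    # same profile, newer binary: the in-place upgrade must succeed while the cluster is still running
    out/minikube-linux-arm64 start -p running-upgrade-520136 --memory=2200 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p running-upgrade-520136

TestStoppedBinaryUpgrade below exercises the same hand-off with a stop between the two starts.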

TestKubernetesUpgrade (354.01s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.11816622s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-231384
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-231384: (1.381706385s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-231384 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-231384 status --format={{.Host}}: exit status 7 (131.77628ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.984165311s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-231384 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (96.088759ms)

-- stdout --
	* [kubernetes-upgrade-231384] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-231384
	    minikube start -p kubernetes-upgrade-231384 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2313842 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-231384 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.995950624s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-231384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-231384
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-231384: (2.153893997s)
--- PASS: TestKubernetesUpgrade (354.01s)
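Note: restated as plain CLI steps, the contract this test checks is that upgrading a stopped cluster in place succeeds while downgrading never does. A sketch drawn from the commands and exit codes above:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-231384
    out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd   # in-place upgrade succeeds
    out/minikube-linux-arm64 start -p kubernetes-upgrade-231384 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # exit 106, K8S_DOWNGRADE_UNSUPPORTED
    # the supported downgrade path, per the error's own suggestion:
    minikube delete -p kubernetes-upgrade-231384
    minikube start -p kubernetes-upgrade-231384 --kubernetes-version=v1.20.0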

TestMissingContainerUpgrade (171.5s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2644675111 start -p missing-upgrade-030889 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2644675111 start -p missing-upgrade-030889 --memory=2200 --driver=docker  --container-runtime=containerd: (1m35.671804875s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-030889
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-030889: (10.348083135s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-030889
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-030889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-030889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.363655355s)
helpers_test.go:175: Cleaning up "missing-upgrade-030889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-030889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-030889: (2.415384349s)
--- PASS: TestMissingContainerUpgrade (171.50s)
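Note: this scenario simulates a node container deleted behind minikube's back: the old release creates the cluster, Docker removes its container, and the binary under test must notice and recreate it on start. The steps, verbatim from the log:

    /tmp/minikube-v1.26.0.2644675111 start -p missing-upgrade-030889 --memory=2200 --driver=docker --container-runtime=containerd
    docker stop missing-upgrade-030889
    docker rm missing-upgrade-030889
    out/minikube-linux-arm64 start -p missing-upgrade-030889 --memory=2200 --driver=docker --container-runtime=containerd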

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-543091 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-543091 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (83.267798ms)

-- stdout --
	* [NoKubernetes-543091] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
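Note: the usage error above names its own workaround; a minimal sketch:

    # --no-kubernetes conflicts with an explicit version, whether passed as a
    # flag or inherited from global config, so clear the config value first
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-543091 --no-kubernetes --driver=docker --container-runtime=containerd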

TestNoKubernetes/serial/StartWithK8s (37.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-543091 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-543091 --driver=docker  --container-runtime=containerd: (36.958032138s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-543091 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.45s)

TestNoKubernetes/serial/StartWithStopK8s (21.06s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-543091 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-543091 --no-kubernetes --driver=docker  --container-runtime=containerd: (18.727449935s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-543091 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-543091 status -o json: exit status 2 (408.552204ms)

-- stdout --
	{"Name":"NoKubernetes-543091","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-543091
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-543091: (1.926769523s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.06s)

TestNoKubernetes/serial/Start (5.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-543091 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-543091 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.625442504s)
--- PASS: TestNoKubernetes/serial/Start (5.63s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-543091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-543091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.878279ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
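Note: the check is a plain systemd probe over SSH. systemctl is-active exits non-zero when the unit is not running (status 3 here), which is exactly the state a --no-kubernetes profile should report. A sketch:

    out/minikube-linux-arm64 ssh -p NoKubernetes-543091 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero means kubelet is inactive, i.e. the test's success condition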

TestNoKubernetes/serial/ProfileList (0.97s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.97s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-543091
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-543091: (1.213050391s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.72s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-543091 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-543091 --driver=docker  --container-runtime=containerd: (6.720182681s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.72s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-543091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-543091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (325.999955ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (0.78s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

TestStoppedBinaryUpgrade/Upgrade (130.84s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2263673721 start -p stopped-upgrade-667124 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2263673721 start -p stopped-upgrade-667124 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.472562455s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2263673721 -p stopped-upgrade-667124 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2263673721 -p stopped-upgrade-667124 stop: (1.284693671s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-667124 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1004 03:31:38.698846 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:31:43.393558 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-667124 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m25.080473502s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.84s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-667124
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-667124: (1.282445016s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

TestPause/serial/Start (92.12s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-019087 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-019087 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.117865902s)
--- PASS: TestPause/serial/Start (92.12s)

TestNetworkPlugins/group/false (3.74s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-372130 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-372130 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (191.451923ms)

-- stdout --
	* [false-372130] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1004 03:34:53.958752 1345396 out.go:345] Setting OutFile to fd 1 ...
	I1004 03:34:53.958940 1345396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:34:53.958952 1345396 out.go:358] Setting ErrFile to fd 2...
	I1004 03:34:53.958957 1345396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1004 03:34:53.959219 1345396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19546-1149434/.minikube/bin
	I1004 03:34:53.959661 1345396 out.go:352] Setting JSON to false
	I1004 03:34:53.960727 1345396 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26242,"bootTime":1727986652,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1004 03:34:53.960802 1345396 start.go:139] virtualization:  
	I1004 03:34:53.963769 1345396 out.go:177] * [false-372130] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1004 03:34:53.965361 1345396 out.go:177]   - MINIKUBE_LOCATION=19546
	I1004 03:34:53.965476 1345396 notify.go:220] Checking for updates...
	I1004 03:34:53.968566 1345396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 03:34:53.970729 1345396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19546-1149434/kubeconfig
	I1004 03:34:53.972559 1345396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19546-1149434/.minikube
	I1004 03:34:53.974451 1345396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1004 03:34:53.979253 1345396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 03:34:53.981606 1345396 config.go:182] Loaded profile config "pause-019087": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1004 03:34:53.981727 1345396 driver.go:394] Setting default libvirt URI to qemu:///system
	I1004 03:34:54.014236 1345396 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1004 03:34:54.014392 1345396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1004 03:34:54.085543 1345396 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-04 03:34:54.074356339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1004 03:34:54.085670 1345396 docker.go:318] overlay module found
	I1004 03:34:54.088010 1345396 out.go:177] * Using the docker driver based on user configuration
	I1004 03:34:54.090153 1345396 start.go:297] selected driver: docker
	I1004 03:34:54.090178 1345396 start.go:901] validating driver "docker" against <nil>
	I1004 03:34:54.090195 1345396 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 03:34:54.093076 1345396 out.go:201] 
	W1004 03:34:54.095790 1345396 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1004 03:34:54.098064 1345396 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-372130 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-372130

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-372130

>>> host: /etc/nsswitch.conf:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /etc/hosts:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /etc/resolv.conf:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-372130

>>> host: crictl pods:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: crictl containers:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> k8s: describe netcat deployment:
error: context "false-372130" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-372130" does not exist

>>> k8s: netcat logs:
error: context "false-372130" does not exist

>>> k8s: describe coredns deployment:
error: context "false-372130" does not exist

>>> k8s: describe coredns pods:
error: context "false-372130" does not exist

>>> k8s: coredns logs:
error: context "false-372130" does not exist

>>> k8s: describe api server pod(s):
error: context "false-372130" does not exist

>>> k8s: api server logs:
error: context "false-372130" does not exist

>>> host: /etc/cni:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: ip a s:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: ip r s:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: iptables-save:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: iptables table nat:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> k8s: describe kube-proxy daemon set:
error: context "false-372130" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-372130" does not exist

>>> k8s: kube-proxy logs:
error: context "false-372130" does not exist

>>> host: kubelet daemon status:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: kubelet daemon config:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> k8s: kubelet logs:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:34:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-019087
contexts:
- context:
    cluster: pause-019087
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:34:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-019087
  name: pause-019087
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-019087
  user:
    client-certificate: /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/pause-019087/client.crt
    client-key: /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/pause-019087/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-372130

>>> host: docker daemon status:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: docker daemon config:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /etc/docker/daemon.json:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: docker system info:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: cri-docker daemon status:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: cri-docker daemon config:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: cri-dockerd version:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: containerd daemon status:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: containerd daemon config:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /etc/containerd/config.toml:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: containerd config dump:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: crio daemon status:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: crio daemon config:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: /etc/crio:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

>>> host: crio config:
* Profile "false-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-372130"

----------------------- debugLogs end: false-372130 [took: 3.397021128s] --------------------------------
helpers_test.go:175: Cleaning up "false-372130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-372130
--- PASS: TestNetworkPlugins/group/false (3.74s)
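Note: the quick failure here is a CLI precondition rather than an exercised network plugin: containerd ships no pod network, so minikube rejects --cni=false for that runtime before creating anything. A hedged sketch (kindnet is assumed below as one of the accepted --cni values; any valid CNI, or leaving the flag unset, should also work):

    # rejected: exit 14, MK_USAGE: The "containerd" container runtime requires CNI
    minikube start -p false-372130 --cni=false --container-runtime=containerd
    # accepted: name a CNI instead
    minikube start -p false-372130 --cni=kindnet --container-runtime=containerd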

TestPause/serial/SecondStartNoReconfiguration (8.37s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-019087 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-019087 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.343675274s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.37s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-019087 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-019087 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-019087 --output=json --layout=cluster: exit status 2 (389.365937ms)

-- stdout --
	{"Name":"pause-019087","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-019087","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
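
The cluster-scoped status above is the contract this check relies on: a paused profile reports StatusCode 418 ("Paused") at the cluster and apiserver level, the kubelet reports 405 ("Stopped"), and the CLI itself exits with status 2 rather than 0. A minimal Go sketch of reading that JSON shape follows; the struct and field set are ours, modeling only the keys visible in the captured stdout, so treat it as an illustration rather than minikube's own types.

// Hypothetical decoder for the --output=json --layout=cluster payload
// shown above. Field names mirror the captured stdout; encoding/json
// matches them without struct tags.
package main

import (
	"encoding/json"
	"fmt"
)

type Component struct {
	Name       string
	StatusCode int
	StatusName string
}

type Node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]Component
}

type ClusterState struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []Node
}

func main() {
	// A trimmed copy of the JSON captured by the test above.
	raw := []byte(`{"Name":"pause-019087","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-019087","StatusCode":200,"StatusName":"OK",
		"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
		"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)

	var st ClusterState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
		}
	}
}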

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-019087 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (1.13s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-019087 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-019087 --alsologtostderr -v=5: (1.129120717s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

TestPause/serial/DeletePaused (2.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-019087 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-019087 --alsologtostderr -v=5: (2.810849867s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

TestPause/serial/VerifyDeletedResources (0.47s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-019087
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-019087: exit status 1 (22.360681ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-019087: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.47s)
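
The verification above leans entirely on exit codes: once the profile has been deleted, docker volume inspect pause-019087 fails with "no such volume", and that failure is the pass signal. Below is a short Go sketch of the same probe, under the assumption that any non-zero exit from the inspect command means the volume is gone; the helper name is ours, not the suite's.

// Post-delete cleanup probe: treat a non-zero exit from
// `docker volume inspect <name>` as proof the volume no longer exists,
// mirroring the "no such volume" error captured above.
package main

import (
	"fmt"
	"os/exec"
)

func volumeGone(name string) bool {
	// Run returns an error (usually *exec.ExitError) on a non-zero exit,
	// which in this check is the expected, successful outcome.
	return exec.Command("docker", "volume", "inspect", name).Run() != nil
}

func main() {
	fmt.Println("pause-019087 removed:", volumeGone("pause-019087"))
}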

TestStartStop/group/old-k8s-version/serial/FirstStart (173.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-445570 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1004 03:36:38.695546 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:36:43.397319 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-445570 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m53.515891353s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (173.52s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-445570 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a7aa20e8-4bc3-4bda-b198-b48544d1536e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a7aa20e8-4bc3-4bda-b198-b48544d1536e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004023623s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-445570 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.02s)

TestStartStop/group/no-preload/serial/FirstStart (78s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-554493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-554493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m18.003664503s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-445570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-445570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.30924047s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-445570 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/old-k8s-version/serial/Stop (12.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-445570 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-445570 --alsologtostderr -v=3: (12.8309874s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.83s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-445570 -n old-k8s-version-445570
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-445570 -n old-k8s-version-445570: exit status 7 (78.160561ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-445570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
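
Note the pattern the EnableAddonAfterStop steps repeat throughout this group: minikube status exits 7 while the host is stopped, the harness logs it as "may be ok", and the addon is enabled anyway. A Go sketch of that tolerant probe is below; the binary path and profile name come straight from the log, while hostState and its return shape are our own invention.

// Tolerant status probe: exit code 7 from `minikube status` means the
// host is stopped, which this step accepts rather than failing on.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func hostState(profile string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	switch {
	case errors.As(err, &ee):
		return string(out), ee.ExitCode() // 7 == stopped host, "may be ok"
	case err != nil:
		return string(out), -1 // binary missing or not startable
	}
	return string(out), 0
}

func main() {
	state, code := hostState("old-k8s-version-445570")
	fmt.Printf("host=%q exit=%d\n", state, code)
}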

TestStartStop/group/no-preload/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-554493 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c50323f1-cf3b-42ec-adf6-f43158f43ce5] Pending
helpers_test.go:344: "busybox" [c50323f1-cf3b-42ec-adf6-f43158f43ce5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c50323f1-cf3b-42ec-adf6-f43158f43ce5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004069371s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-554493 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.45s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-554493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-554493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048169279s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-554493 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (12.38s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-554493 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-554493 --alsologtostderr -v=3: (12.376811819s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.38s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-554493 -n no-preload-554493
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-554493 -n no-preload-554493: exit status 7 (79.786087ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-554493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (267.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-554493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1004 03:41:38.695497 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:41:43.394375 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-554493 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.272814888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-554493 -n no-preload-554493
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.69s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7kx75" [b08f65bd-ab74-42cb-b8fb-266fa1639a25] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.016701767s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7kx75" [b08f65bd-ab74-42cb-b8fb-266fa1639a25] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004693586s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-554493 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-554493 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.34s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-554493 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-554493 --alsologtostderr -v=1: (1.020229318s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-554493 -n no-preload-554493
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-554493 -n no-preload-554493: exit status 2 (395.973891ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-554493 -n no-preload-554493
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-554493 -n no-preload-554493: exit status 2 (338.324298ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-554493 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-554493 -n no-preload-554493
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-554493 -n no-preload-554493
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.34s)
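
The Pause subtest above is a full round-trip: pause the profile, confirm that status now exits 2 (apiserver "Paused", kubelet "Stopped"), unpause, and confirm status succeeds again. Here is a condensed Go sketch of that sequence, assuming the same binary path and profile as the log; run() is our helper, not part of the test suite.

// Pause round-trip: each step shells out to the minikube binary and
// inspects the exit code, the way the logged test does.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func run(args ...string) int {
	err := exec.Command("out/minikube-linux-arm64", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1 // binary missing or not executable
	}
	return 0
}

func main() {
	p := "no-preload-554493"
	fmt.Println("pause:", run("pause", "-p", p))   // expect 0
	fmt.Println("paused status:",                  // expect 2 while paused
		run("status", "--format={{.APIServer}}", "-p", p, "-n", p))
	fmt.Println("unpause:", run("unpause", "-p", p)) // expect 0
	fmt.Println("status:",                           // expect 0 again
		run("status", "--format={{.APIServer}}", "-p", p, "-n", p))
}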

TestStartStop/group/embed-certs/serial/FirstStart (108.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-690742 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-690742 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m48.563525195s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (108.56s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-psgpt" [87a45dd6-3902-4d92-8554-82367fdd9cab] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003932159s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-psgpt" [87a45dd6-3902-4d92-8554-82367fdd9cab] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003922785s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-445570 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-445570 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (3.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-445570 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-445570 -n old-k8s-version-445570
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-445570 -n old-k8s-version-445570: exit status 2 (377.368169ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-445570 -n old-k8s-version-445570
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-445570 -n old-k8s-version-445570: exit status 2 (412.381922ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-445570 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-445570 -n old-k8s-version-445570
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-445570 -n old-k8s-version-445570
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.60s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-202429 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1004 03:46:38.695593 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:46:43.394479 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-202429 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m31.099755417s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.10s)

TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-690742 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [adb08341-bb38-4570-827d-67ed9604be4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [adb08341-bb38-4570-827d-67ed9604be4f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003401767s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-690742 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-690742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-690742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044800947s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-690742 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-690742 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-690742 --alsologtostderr -v=3: (12.424018868s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.42s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-202429 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ebb28ed4-917c-4c6e-a281-d4e848f5b883] Pending
helpers_test.go:344: "busybox" [ebb28ed4-917c-4c6e-a281-d4e848f5b883] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ebb28ed4-917c-4c6e-a281-d4e848f5b883] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004842198s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-202429 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-202429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-202429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.110904423s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-202429 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-690742 -n embed-certs-690742
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-690742 -n embed-certs-690742: exit status 7 (135.585627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-690742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-202429 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-202429 --alsologtostderr -v=3: (12.341096202s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

TestStartStop/group/embed-certs/serial/SecondStart (274.63s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-690742 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-690742 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m34.276294473s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-690742 -n embed-certs-690742
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (274.63s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429: exit status 7 (112.732172ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-202429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-202429 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1004 03:49:13.978223 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:13.984544 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:13.995848 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:14.017232 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:14.058561 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:14.139950 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:14.302477 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:14.624271 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:15.266432 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:16.547838 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:19.110385 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:24.232405 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:34.474020 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:49:54.956411 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:34.422545 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:34.429141 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:34.440609 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:34.461969 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:34.503365 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:34.584813 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:34.746303 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:35.067804 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:35.709143 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:35.918273 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:36.990764 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:39.553020 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:44.674989 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:50:54.916316 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:51:15.398146 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:51:21.777961 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:51:38.695484 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:51:43.393587 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:51:56.359445 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:51:57.840602 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-202429 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m53.954437078s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.64s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jhl8x" [73a0f727-f99c-44bc-84dc-92a370f66d3d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003555527s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jhl8x" [73a0f727-f99c-44bc-84dc-92a370f66d3d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0035773s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-690742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-690742 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-690742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-690742 -n embed-certs-690742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-690742 -n embed-certs-690742: exit status 2 (316.11567ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-690742 -n embed-certs-690742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-690742 -n embed-certs-690742: exit status 2 (316.152682ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-690742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-690742 -n embed-certs-690742
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-690742 -n embed-certs-690742
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/newest-cni/serial/FirstStart (35.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-146436 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-146436 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (35.854217233s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9nn99" [f2f8e401-8870-40e8-a029-1809ea12702c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005737603s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9nn99" [f2f8e401-8870-40e8-a029-1809ea12702c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004437309s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-202429 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-202429 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-202429 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-202429 --alsologtostderr -v=1: (1.048129629s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429: exit status 2 (436.402101ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429: exit status 2 (461.83644ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-202429 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-202429 --alsologtostderr -v=1: (1.048262251s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.04s)
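As the Pause sequence above shows, "minikube status" intentionally exits with status 2 while the cluster is paused, reporting the API server as Paused and the kubelet as Stopped; the harness accepts this ("may be ok"). The full cycle, condensed from the exact commands in the run:

out/minikube-linux-arm64 pause -p default-k8s-diff-port-202429 --alsologtostderr -v=1
# While paused: prints "Paused" / "Stopped" and exits with status 2
out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-202429 -n default-k8s-diff-port-202429
# After unpause the same status commands exit 0 again
out/minikube-linux-arm64 unpause -p default-k8s-diff-port-202429 --alsologtostderr -v=1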

                                                
                                    
TestNetworkPlugins/group/auto/Start (96.92s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m36.92021771s)
--- PASS: TestNetworkPlugins/group/auto/Start (96.92s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-146436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-146436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.424509375s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/newest-cni/serial/Stop (3.05s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-146436 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-146436 --alsologtostderr -v=3: (3.051918765s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.05s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-146436 -n newest-cni-146436
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-146436 -n newest-cni-146436: exit status 7 (100.875009ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-146436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/newest-cni/serial/SecondStart (24.11s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-146436 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-146436 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (23.609360669s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-146436 -n newest-cni-146436
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.11s)
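The restart above reuses the stopped newest-cni profile; because --wait only gates on the listed components, the run can pass without scheduling any user pods (hence the UserAppExistsAfterStop and AddonExistsAfterStop no-ops below). The same invocation, reformatted for readability:

# --wait limits readiness checks to the apiserver, system pods, and default
# service account; --extra-config hands the pod CIDR to kubeadm for CNI setup.
out/minikube-linux-arm64 start -p newest-cni-146436 \
  --memory=2200 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.31.1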

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-146436 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (3.77s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-146436 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-146436 --alsologtostderr -v=1: (1.190811978s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-146436 -n newest-cni-146436
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-146436 -n newest-cni-146436: exit status 2 (409.427557ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-146436 -n newest-cni-146436
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-146436 -n newest-cni-146436: exit status 2 (388.052315ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-146436 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-146436 -n newest-cni-146436
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-146436 -n newest-cni-146436
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.77s)

TestNetworkPlugins/group/custom-flannel/Start (54.98s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1004 03:54:13.978170 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:54:41.682663 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/old-k8s-version-445570/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.979110446s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.98s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-372130 "pgrep -a kubelet"
I1004 03:54:52.658432 1154813 config.go:182] Loaded profile config "custom-flannel-372130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-372130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9jxhz" [62da2520-419b-411c-842e-d06e97f0a18b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9jxhz" [62da2520-419b-411c-842e-d06e97f0a18b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003787208s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-372130 "pgrep -a kubelet"
I1004 03:54:56.739601 1154813 config.go:182] Loaded profile config "auto-372130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-372130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4lkhk" [378877d2-2906-43de-af84-785c224fd1fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4lkhk" [378877d2-2906-43de-af84-785c224fd1fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004552996s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-372130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-372130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
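Every CNI profile in this run is validated with the same three probes against the netcat deployment: in-cluster DNS resolution, a localhost port check, and a hairpin check (the pod reaching itself through its own Service). Condensed from the auto-372130 commands above:

kubectl --context auto-372130 exec deployment/netcat -- nslookup kubernetes.default
# localhost: the pod can reach its own port directly
kubectl --context auto-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# hairpin: the pod reaches itself through the netcat Service
kubectl --context auto-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"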

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (100.69s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m40.693202447s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (100.69s)

TestNetworkPlugins/group/flannel/Start (56.37s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1004 03:55:34.422553 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:56:02.124450 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/no-preload-554493/client.crt: no such file or directory" logger="UnhandledError"
E1004 03:56:26.462485 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/functional-421253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.369116018s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.37s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rvthg" [24bd7a3a-1840-44ac-a0f7-590ffadd1022] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003881081s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
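The ControllerPod checks poll for a Running pod matching the plugin's label selector, here app=flannel in the kube-flannel namespace. A roughly equivalent manual wait with plain kubectl (illustrative; not what helpers_test.go actually runs):

# Block until the flannel DaemonSet pod reports Ready, up to the 10m budget
kubectl --context flannel-372130 -n kube-flannel wait pod \
  -l app=flannel --for=condition=Ready --timeout=600s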

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-372130 "pgrep -a kubelet"
I1004 03:56:33.456312 1154813 config.go:182] Loaded profile config "flannel-372130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-372130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8s6p9" [4ac3d6d8-4d4f-4652-a0b4-10319326b5be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8s6p9" [4ac3d6d8-4d4f-4652-a0b4-10319326b5be] Running
E1004 03:56:38.694679 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/addons-813566/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003910974s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-372130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (43.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (43.149657134s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.15s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nn5f5" [f3356a48-ad51-4e11-be01-f07b3975448e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005473429s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-372130 "pgrep -a kubelet"
I1004 03:57:16.281996 1154813 config.go:182] Loaded profile config "kindnet-372130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-372130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-89t4n" [4442267e-a079-47b3-8ac5-f926b46348b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-89t4n" [4442267e-a079-47b3-8ac5-f926b46348b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003610119s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-372130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-372130 "pgrep -a kubelet"
E1004 03:57:48.066600 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/default-k8s-diff-port-202429/client.crt: no such file or directory" logger="UnhandledError"
I1004 03:57:48.089000 1154813 config.go:182] Loaded profile config "enable-default-cni-372130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-372130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zhgq9" [c2fa997a-c760-496e-a35c-1fc675cf36b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zhgq9" [c2fa997a-c760-496e-a35c-1fc675cf36b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003691436s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

TestNetworkPlugins/group/bridge/Start (54.28s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1004 03:57:53.188559 1154813 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/default-k8s-diff-port-202429/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (54.28025027s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-372130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (68.87s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-372130 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.865012028s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.87s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-372130 "pgrep -a kubelet"
I1004 03:58:46.379633 1154813 config.go:182] Loaded profile config "bridge-372130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-372130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c22zn" [c983053e-cc34-40d1-b5d1-4c4ef5c68149] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c22zn" [c983053e-cc34-40d1-b5d1-4c4ef5c68149] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003394791s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

TestNetworkPlugins/group/bridge/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-372130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

TestNetworkPlugins/group/bridge/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2t9nj" [05a28cfe-e55c-4dcc-9d3c-563631d09333] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00382209s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-372130 "pgrep -a kubelet"
I1004 03:59:39.805619 1154813 config.go:182] Loaded profile config "calico-372130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (8.29s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-372130 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q2zcq" [683ff3d4-8261-42e9-829f-4001716abd1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q2zcq" [683ff3d4-8261-42e9-829f-4001716abd1e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.00396196s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.29s)

TestNetworkPlugins/group/calico/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-372130 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.79s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.79s)

TestNetworkPlugins/group/calico/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-372130 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

Test skip (27/329)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-341067 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-341067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-341067
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:423: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.26s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-102319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-102319
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

TestNetworkPlugins/group/kubenet (3.46s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-372130 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-372130

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-372130

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /etc/hosts:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /etc/resolv.conf:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-372130

>>> host: crictl pods:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: crictl containers:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> k8s: describe netcat deployment:
error: context "kubenet-372130" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-372130" does not exist

>>> k8s: netcat logs:
error: context "kubenet-372130" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-372130" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-372130" does not exist

>>> k8s: coredns logs:
error: context "kubenet-372130" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-372130" does not exist

>>> k8s: api server logs:
error: context "kubenet-372130" does not exist
>>> host: /etc/cni:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-372130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-372130" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-372130" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:34:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-019087
contexts:
- context:
    cluster: pause-019087
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:34:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-019087
  name: pause-019087
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-019087
  user:
    client-certificate: /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/pause-019087/client.crt
    client-key: /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/pause-019087/client.key
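
The kubeconfig dump above explains every failure in this section: the file defines only the pause-019087 cluster and context, and current-context is empty, so any command pinned to the kubenet-372130 context fails before it ever reaches a cluster. A minimal sketch of checking this with client-go follows; the package layout and the kubeconfig path are illustrative assumptions, not part of the minikube test suite.

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative path (assumption); the report's kubeconfig lives under
    	// /home/jenkins/minikube-integration/19546-1149434/.minikube on the CI host.
    	cfg, err := clientcmd.LoadFromFile("/root/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Only pause-019087 is defined and current-context is "", so a lookup
    	// of kubenet-372130 misses, matching the errors throughout the dump.
    	if _, ok := cfg.Contexts["kubenet-372130"]; !ok {
    		fmt.Println("context was not found for specified context: kubenet-372130")
    	}
    }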

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-372130

>>> host: docker daemon status:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: docker daemon config:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: docker system info:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: cri-docker daemon status:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: cri-docker daemon config:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: cri-dockerd version:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: containerd daemon status:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: containerd daemon config:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: containerd config dump:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: crio daemon status:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: crio daemon config:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: /etc/crio:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

>>> host: crio config:
* Profile "kubenet-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-372130"

----------------------- debugLogs end: kubenet-372130 [took: 3.309708257s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-372130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-372130
--- SKIP: TestNetworkPlugins/group/kubenet (3.46s)
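
For anyone reading the wall of identical errors above: the debugLogs collector shells out once per probe with the profile name baked into --context (kubectl) or -p (minikube), so a profile that was deleted or never started produces the same one or two lines for every probe. A rough sketch of that shape, with a hypothetical runProbe helper that is not minikube's actual collector:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runProbe is a hypothetical stand-in for one debugLogs probe: a single
    // kubectl query pinned to the profile-named context.
    func runProbe(profile string) {
    	out, err := exec.Command("kubectl", "--context", profile, "get", "nodes").CombinedOutput()
    	if err != nil {
    		// With no such context, kubectl prints:
    		//   Error in configuration: context was not found for specified context: kubenet-372130
    		fmt.Printf(">>> probe failed:\n%s", out)
    	}
    }

    func main() {
    	runProbe("kubenet-372130")
    }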

TestNetworkPlugins/group/cilium (4.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-372130 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-372130

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-372130

>>> host: /etc/nsswitch.conf:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /etc/hosts:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /etc/resolv.conf:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-372130

>>> host: crictl pods:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: crictl containers:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> k8s: describe netcat deployment:
error: context "cilium-372130" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-372130" does not exist

>>> k8s: netcat logs:
error: context "cilium-372130" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-372130" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-372130" does not exist

>>> k8s: coredns logs:
error: context "cilium-372130" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-372130" does not exist

>>> k8s: api server logs:
error: context "cilium-372130" does not exist

>>> host: /etc/cni:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: ip a s:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: ip r s:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: iptables-save:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: iptables table nat:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-372130

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-372130

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-372130" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-372130" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-372130

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-372130

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-372130" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-372130" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-372130" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-372130" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-372130" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: kubelet daemon config:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> k8s: kubelet logs:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19546-1149434/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:34:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-019087
contexts:
- context:
    cluster: pause-019087
    extensions:
    - extension:
        last-update: Fri, 04 Oct 2024 03:34:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-019087
  name: pause-019087
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-019087
  user:
    client-certificate: /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/pause-019087/client.crt
    client-key: /home/jenkins/minikube-integration/19546-1149434/.minikube/profiles/pause-019087/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-372130

>>> host: docker daemon status:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: docker daemon config:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: docker system info:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: cri-docker daemon status:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: cri-docker daemon config:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: cri-dockerd version:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: containerd daemon status:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: containerd daemon config:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: containerd config dump:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: crio daemon status:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: crio daemon config:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: /etc/crio:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

>>> host: crio config:
* Profile "cilium-372130" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-372130"

----------------------- debugLogs end: cilium-372130 [took: 4.148467725s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-372130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-372130
--- SKIP: TestNetworkPlugins/group/cilium (4.30s)
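
Both network-plugin groups end as SKIP rather than FAIL because the test bails out at net_test.go:102 before a cluster is ever started; the debug dump appears to be emitted afterwards by a deferred helper (note the panic.go frame in the log). A minimal sketch of that skip pattern in Go's testing package; the function name is illustrative, not minikube's actual test:

    package net_test

    import "testing"

    // TestCiliumSketch shows the pattern behind the SKIP result above: t.Skip
    // records the reason and marks the test skipped instead of failed.
    func TestCiliumSketch(t *testing.T) {
    	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
    }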