Test Report: Docker_Linux_containerd_arm64 19479

                    
                      913baf54a454bfbef3be1ea09a51779f85ec9369:2024-08-19:35854
                    
                

Failed tests (4/328)

Order  Failed test                                               Duration (s)
29     TestAddons/serial/Volcano                                  200.49
51     TestDockerEnvContainerd                                     51.35
97     TestFunctional/parallel/PersistentVolumeClaim              188.82
302    TestStartStop/group/old-k8s-version/serial/SecondStart     382.41
TestAddons/serial/Volcano (200.49s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 37.695633ms
addons_test.go:897: volcano-scheduler stabilized in 37.783928ms
addons_test.go:905: volcano-admission stabilized in 38.506548ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-76ltd" [45bd4d2d-4b71-4d17-8c17-6a3fe4fed238] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003271469s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-wvwhd" [0a86e382-fc17-4b95-8016-fa94e81e8f61] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004409494s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-lw997" [f4fa4306-d414-4644-8815-5168552c528c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004693415s
addons_test.go:932: (dbg) Run:  kubectl --context addons-789485 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-789485 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-789485 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [32b1414f-6cc8-497b-8684-893e79dab925] Pending
helpers_test.go:344: "test-job-nginx-0" [32b1414f-6cc8-497b-8684-893e79dab925] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-789485 -n addons-789485
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-19 13:02:58.692463378 +0000 UTC m=+429.051415596
addons_test.go:964: (dbg) Run:  kubectl --context addons-789485 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-789485 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-b8537eab-967a-4c02-bf58-abbf00f1cca8
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qk6g9 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-qk6g9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-789485 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-789485 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
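
Note: the FailedScheduling event above points at the root cause: the vcjob pod asks for a full CPU (requests and limits of cpu: 1), while the single addons-789485 node was started with only 2 CPUs (see the --cpus=2 docker run command later in this log), so once the control-plane and addon pods are running there is less than one whole CPU left to allocate. A quick way to confirm this outside the test harness would be to compare the node's allocatable CPU with what is already requested; the commands below are illustrative only and are not part of the test:

  # Show allocatable CPU and the requests already accounted for on the node
  kubectl --context addons-789485 describe node addons-789485 | grep -A 10 "Allocated resources"
  # List the CPU request of every pod to see what is consuming the 2 CPUs
  kubectl --context addons-789485 get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu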
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-789485
helpers_test.go:235: (dbg) docker inspect addons-789485:

-- stdout --
	[
	    {
	        "Id": "32ec921828636abb7706b42553763141cf9acdb46a2daebff70417b4d07b3041",
	        "Created": "2024-08-19T12:56:30.137606489Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4147805,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T12:56:30.300461566Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/32ec921828636abb7706b42553763141cf9acdb46a2daebff70417b4d07b3041/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32ec921828636abb7706b42553763141cf9acdb46a2daebff70417b4d07b3041/hostname",
	        "HostsPath": "/var/lib/docker/containers/32ec921828636abb7706b42553763141cf9acdb46a2daebff70417b4d07b3041/hosts",
	        "LogPath": "/var/lib/docker/containers/32ec921828636abb7706b42553763141cf9acdb46a2daebff70417b4d07b3041/32ec921828636abb7706b42553763141cf9acdb46a2daebff70417b4d07b3041-json.log",
	        "Name": "/addons-789485",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-789485:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-789485",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d8b5fd2ed9a2808fde66b4171ffd9c2cef6bd3eeb8b47e6fafb23e58223a059-init/diff:/var/lib/docker/overlay2/f9730c920ad297aa3b42f5a0ebbe1c9311721ca848f3268205322d3e26bf32e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d8b5fd2ed9a2808fde66b4171ffd9c2cef6bd3eeb8b47e6fafb23e58223a059/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d8b5fd2ed9a2808fde66b4171ffd9c2cef6bd3eeb8b47e6fafb23e58223a059/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d8b5fd2ed9a2808fde66b4171ffd9c2cef6bd3eeb8b47e6fafb23e58223a059/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-789485",
	                "Source": "/var/lib/docker/volumes/addons-789485/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-789485",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-789485",
	                "name.minikube.sigs.k8s.io": "addons-789485",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7959ffc6276c73f4c3554d95bcc6eb049e7d98dce013cdb30437f041dc12eea5",
	            "SandboxKey": "/var/run/docker/netns/7959ffc6276c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38262"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-789485": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d1704f47035633e61034be77ef171a9c9649994d26aa3620e8203241da14c986",
	                    "EndpointID": "098dfadf7dc848bbd228dbc72b6e5c52ae0b42cb6e53c8b68cf6194ddc66eda7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-789485",
	                        "32ec92182863"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
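
Note: the docker inspect output above also shows the resource caps minikube applied to the node container: "NanoCpus": 2000000000 (2 CPUs) and "Memory": 4194304000 bytes (from --memory=4000), which lines up with the scheduler reporting Insufficient cpu for a pod requesting a full CPU. As an illustrative check (not something the test runs), the same two fields could be read directly with a --format template:

  # Print the CPU and memory limits of the minikube node container
  docker inspect addons-789485 --format '{{.HostConfig.NanoCpus}} nanocpus, {{.HostConfig.Memory}} bytes'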
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-789485 -n addons-789485
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-789485 logs -n 25: (1.956459402s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-106115   | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC |                     |
	|         | -p download-only-106115              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC | 19 Aug 24 12:55 UTC |
	| delete  | -p download-only-106115              | download-only-106115   | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC | 19 Aug 24 12:55 UTC |
	| start   | -o=json --download-only              | download-only-072642   | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC |                     |
	|         | -p download-only-072642              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC | 19 Aug 24 12:56 UTC |
	| delete  | -p download-only-072642              | download-only-072642   | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC | 19 Aug 24 12:56 UTC |
	| delete  | -p download-only-106115              | download-only-106115   | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC | 19 Aug 24 12:56 UTC |
	| delete  | -p download-only-072642              | download-only-072642   | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC | 19 Aug 24 12:56 UTC |
	| start   | --download-only -p                   | download-docker-715057 | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC |                     |
	|         | download-docker-715057               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-715057            | download-docker-715057 | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC | 19 Aug 24 12:56 UTC |
	| start   | --download-only -p                   | binary-mirror-234697   | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC |                     |
	|         | binary-mirror-234697                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44223               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-234697              | binary-mirror-234697   | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC | 19 Aug 24 12:56 UTC |
	| addons  | enable dashboard -p                  | addons-789485          | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC |                     |
	|         | addons-789485                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-789485          | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC |                     |
	|         | addons-789485                        |                        |         |         |                     |                     |
	| start   | -p addons-789485 --wait=true         | addons-789485          | jenkins | v1.33.1 | 19 Aug 24 12:56 UTC | 19 Aug 24 12:59 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:56:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:56:05.256651 4147311 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:56:05.256873 4147311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:56:05.256900 4147311 out.go:358] Setting ErrFile to fd 2...
	I0819 12:56:05.256920 4147311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:56:05.257193 4147311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 12:56:05.257719 4147311 out.go:352] Setting JSON to false
	I0819 12:56:05.258749 4147311 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":95909,"bootTime":1723976256,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 12:56:05.258864 4147311 start.go:139] virtualization:  
	I0819 12:56:05.261595 4147311 out.go:177] * [addons-789485] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 12:56:05.262991 4147311 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 12:56:05.263080 4147311 notify.go:220] Checking for updates...
	I0819 12:56:05.266859 4147311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:56:05.269164 4147311 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 12:56:05.271048 4147311 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 12:56:05.272364 4147311 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 12:56:05.273772 4147311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:56:05.275415 4147311 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:56:05.296856 4147311 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:56:05.296990 4147311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:56:05.356073 4147311 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 12:56:05.345741128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:56:05.356194 4147311 docker.go:307] overlay module found
	I0819 12:56:05.359710 4147311 out.go:177] * Using the docker driver based on user configuration
	I0819 12:56:05.361875 4147311 start.go:297] selected driver: docker
	I0819 12:56:05.361938 4147311 start.go:901] validating driver "docker" against <nil>
	I0819 12:56:05.361956 4147311 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:56:05.362566 4147311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:56:05.418382 4147311 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 12:56:05.409240572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:56:05.418569 4147311 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 12:56:05.418855 4147311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:56:05.420152 4147311 out.go:177] * Using Docker driver with root privileges
	I0819 12:56:05.421917 4147311 cni.go:84] Creating CNI manager for ""
	I0819 12:56:05.421947 4147311 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:56:05.421966 4147311 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 12:56:05.422054 4147311 start.go:340] cluster config:
	{Name:addons-789485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-789485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:56:05.423466 4147311 out.go:177] * Starting "addons-789485" primary control-plane node in "addons-789485" cluster
	I0819 12:56:05.424896 4147311 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 12:56:05.426403 4147311 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 12:56:05.427508 4147311 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 12:56:05.427560 4147311 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 12:56:05.427573 4147311 cache.go:56] Caching tarball of preloaded images
	I0819 12:56:05.427591 4147311 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 12:56:05.427652 4147311 preload.go:172] Found /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 12:56:05.427662 4147311 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 12:56:05.428134 4147311 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/config.json ...
	I0819 12:56:05.428168 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/config.json: {Name:mk1c412cd73c18cc4ba682ab161d6d55ff4b2373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:05.441700 4147311 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 12:56:05.441812 4147311 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 12:56:05.441832 4147311 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 12:56:05.441836 4147311 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 12:56:05.441844 4147311 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 12:56:05.441850 4147311 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 12:56:22.684561 4147311 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 12:56:22.684607 4147311 cache.go:194] Successfully downloaded all kic artifacts
	I0819 12:56:22.684654 4147311 start.go:360] acquireMachinesLock for addons-789485: {Name:mk22f36908f8c41c41cf88416d85459e514cd2a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:56:22.685497 4147311 start.go:364] duration metric: took 811.654µs to acquireMachinesLock for "addons-789485"
	I0819 12:56:22.685545 4147311 start.go:93] Provisioning new machine with config: &{Name:addons-789485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-789485 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 12:56:22.685633 4147311 start.go:125] createHost starting for "" (driver="docker")
	I0819 12:56:22.687107 4147311 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 12:56:22.687380 4147311 start.go:159] libmachine.API.Create for "addons-789485" (driver="docker")
	I0819 12:56:22.687413 4147311 client.go:168] LocalClient.Create starting
	I0819 12:56:22.687536 4147311 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem
	I0819 12:56:23.368694 4147311 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem
	I0819 12:56:23.787759 4147311 cli_runner.go:164] Run: docker network inspect addons-789485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 12:56:23.802124 4147311 cli_runner.go:211] docker network inspect addons-789485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 12:56:23.802219 4147311 network_create.go:284] running [docker network inspect addons-789485] to gather additional debugging logs...
	I0819 12:56:23.802242 4147311 cli_runner.go:164] Run: docker network inspect addons-789485
	W0819 12:56:23.817115 4147311 cli_runner.go:211] docker network inspect addons-789485 returned with exit code 1
	I0819 12:56:23.817148 4147311 network_create.go:287] error running [docker network inspect addons-789485]: docker network inspect addons-789485: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-789485 not found
	I0819 12:56:23.817163 4147311 network_create.go:289] output of [docker network inspect addons-789485]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-789485 not found
	
	** /stderr **
	I0819 12:56:23.817274 4147311 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 12:56:23.833311 4147311 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400177d070}
	I0819 12:56:23.833354 4147311 network_create.go:124] attempt to create docker network addons-789485 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 12:56:23.833412 4147311 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-789485 addons-789485
	I0819 12:56:23.902244 4147311 network_create.go:108] docker network addons-789485 192.168.49.0/24 created
	I0819 12:56:23.902286 4147311 kic.go:121] calculated static IP "192.168.49.2" for the "addons-789485" container
	I0819 12:56:23.902365 4147311 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 12:56:23.916064 4147311 cli_runner.go:164] Run: docker volume create addons-789485 --label name.minikube.sigs.k8s.io=addons-789485 --label created_by.minikube.sigs.k8s.io=true
	I0819 12:56:23.932186 4147311 oci.go:103] Successfully created a docker volume addons-789485
	I0819 12:56:23.932279 4147311 cli_runner.go:164] Run: docker run --rm --name addons-789485-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789485 --entrypoint /usr/bin/test -v addons-789485:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 12:56:25.930071 4147311 cli_runner.go:217] Completed: docker run --rm --name addons-789485-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789485 --entrypoint /usr/bin/test -v addons-789485:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (1.997742928s)
	I0819 12:56:25.930117 4147311 oci.go:107] Successfully prepared a docker volume addons-789485
	I0819 12:56:25.930136 4147311 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 12:56:25.930156 4147311 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 12:56:25.930258 4147311 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-789485:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 12:56:29.999149 4147311 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-789485:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.068843627s)
	I0819 12:56:29.999182 4147311 kic.go:203] duration metric: took 4.069022801s to extract preloaded images to volume ...
	W0819 12:56:29.999331 4147311 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 12:56:29.999442 4147311 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 12:56:30.119856 4147311 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-789485 --name addons-789485 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789485 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-789485 --network addons-789485 --ip 192.168.49.2 --volume addons-789485:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 12:56:30.458021 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Running}}
	I0819 12:56:30.483439 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:30.504580 4147311 cli_runner.go:164] Run: docker exec addons-789485 stat /var/lib/dpkg/alternatives/iptables
	I0819 12:56:30.568209 4147311 oci.go:144] the created container "addons-789485" has a running status.
	I0819 12:56:30.568236 4147311 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa...
	I0819 12:56:31.736056 4147311 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 12:56:31.756466 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:31.774421 4147311 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 12:56:31.774448 4147311 kic_runner.go:114] Args: [docker exec --privileged addons-789485 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 12:56:31.839904 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:31.858258 4147311 machine.go:93] provisionDockerMachine start ...
	I0819 12:56:31.858352 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:31.877330 4147311 main.go:141] libmachine: Using SSH client type: native
	I0819 12:56:31.877612 4147311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38260 <nil> <nil>}
	I0819 12:56:31.877629 4147311 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:56:32.015610 4147311 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-789485
	
	I0819 12:56:32.015638 4147311 ubuntu.go:169] provisioning hostname "addons-789485"
	I0819 12:56:32.015730 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:32.033283 4147311 main.go:141] libmachine: Using SSH client type: native
	I0819 12:56:32.033543 4147311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38260 <nil> <nil>}
	I0819 12:56:32.033560 4147311 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-789485 && echo "addons-789485" | sudo tee /etc/hostname
	I0819 12:56:32.176638 4147311 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-789485
	
	I0819 12:56:32.176789 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:32.194420 4147311 main.go:141] libmachine: Using SSH client type: native
	I0819 12:56:32.194682 4147311 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38260 <nil> <nil>}
	I0819 12:56:32.194705 4147311 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-789485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-789485/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-789485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:56:32.324099 4147311 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:56:32.324126 4147311 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19479-4141166/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-4141166/.minikube}
	I0819 12:56:32.324144 4147311 ubuntu.go:177] setting up certificates
	I0819 12:56:32.324162 4147311 provision.go:84] configureAuth start
	I0819 12:56:32.324222 4147311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789485
	I0819 12:56:32.340901 4147311 provision.go:143] copyHostCerts
	I0819 12:56:32.340994 4147311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem (1082 bytes)
	I0819 12:56:32.341128 4147311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem (1123 bytes)
	I0819 12:56:32.341203 4147311 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem (1675 bytes)
	I0819 12:56:32.341278 4147311 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem org=jenkins.addons-789485 san=[127.0.0.1 192.168.49.2 addons-789485 localhost minikube]
	I0819 12:56:32.653059 4147311 provision.go:177] copyRemoteCerts
	I0819 12:56:32.653135 4147311 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:56:32.653180 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:32.672326 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:32.765014 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:56:32.789869 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 12:56:32.814676 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 12:56:32.839491 4147311 provision.go:87] duration metric: took 515.314521ms to configureAuth
	I0819 12:56:32.839565 4147311 ubuntu.go:193] setting minikube options for container-runtime
	I0819 12:56:32.839848 4147311 config.go:182] Loaded profile config "addons-789485": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:56:32.839887 4147311 machine.go:96] duration metric: took 981.609043ms to provisionDockerMachine
	I0819 12:56:32.839910 4147311 client.go:171] duration metric: took 10.15248615s to LocalClient.Create
	I0819 12:56:32.839941 4147311 start.go:167] duration metric: took 10.152562735s to libmachine.API.Create "addons-789485"
	I0819 12:56:32.839951 4147311 start.go:293] postStartSetup for "addons-789485" (driver="docker")
	I0819 12:56:32.839969 4147311 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:56:32.840037 4147311 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:56:32.840095 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:32.859638 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:32.953180 4147311 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:56:32.956501 4147311 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 12:56:32.956539 4147311 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 12:56:32.956552 4147311 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 12:56:32.956560 4147311 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 12:56:32.956570 4147311 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/addons for local assets ...
	I0819 12:56:32.956641 4147311 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/files for local assets ...
	I0819 12:56:32.956671 4147311 start.go:296] duration metric: took 116.7074ms for postStartSetup
	I0819 12:56:32.956981 4147311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789485
	I0819 12:56:32.980096 4147311 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/config.json ...
	I0819 12:56:32.980450 4147311 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:56:32.980506 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:32.997067 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:33.089742 4147311 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 12:56:33.094938 4147311 start.go:128] duration metric: took 10.409287959s to createHost
	I0819 12:56:33.094962 4147311 start.go:83] releasing machines lock for "addons-789485", held for 10.409440917s
	I0819 12:56:33.095038 4147311 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789485
	I0819 12:56:33.114671 4147311 ssh_runner.go:195] Run: cat /version.json
	I0819 12:56:33.114724 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:33.114982 4147311 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:56:33.115017 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:33.141379 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:33.143420 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:33.354847 4147311 ssh_runner.go:195] Run: systemctl --version
	I0819 12:56:33.359162 4147311 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 12:56:33.363757 4147311 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 12:56:33.389338 4147311 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 12:56:33.389437 4147311 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:56:33.421646 4147311 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 12:56:33.421723 4147311 start.go:495] detecting cgroup driver to use...
	I0819 12:56:33.421772 4147311 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 12:56:33.421852 4147311 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 12:56:33.435055 4147311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 12:56:33.446460 4147311 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:56:33.446555 4147311 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:56:33.460376 4147311 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:56:33.474541 4147311 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:56:33.565203 4147311 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:56:33.657798 4147311 docker.go:233] disabling docker service ...
	I0819 12:56:33.657902 4147311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:56:33.678025 4147311 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:56:33.690092 4147311 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:56:33.790844 4147311 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:56:33.881320 4147311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:56:33.892545 4147311 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:56:33.909462 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 12:56:33.920016 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 12:56:33.930671 4147311 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 12:56:33.930738 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 12:56:33.941686 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 12:56:33.952111 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 12:56:33.961890 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 12:56:33.971743 4147311 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:56:33.981290 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 12:56:33.991713 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 12:56:34.002022 4147311 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 12:56:34.015669 4147311 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:56:34.025830 4147311 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:56:34.034994 4147311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:56:34.119974 4147311 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 12:56:34.253403 4147311 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 12:56:34.253544 4147311 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 12:56:34.257267 4147311 start.go:563] Will wait 60s for crictl version
	I0819 12:56:34.257363 4147311 ssh_runner.go:195] Run: which crictl
	I0819 12:56:34.261119 4147311 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:56:34.299080 4147311 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 12:56:34.299218 4147311 ssh_runner.go:195] Run: containerd --version
	I0819 12:56:34.320869 4147311 ssh_runner.go:195] Run: containerd --version
	I0819 12:56:34.347490 4147311 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 12:56:34.350245 4147311 cli_runner.go:164] Run: docker network inspect addons-789485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 12:56:34.366047 4147311 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 12:56:34.369920 4147311 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:56:34.381658 4147311 kubeadm.go:883] updating cluster {Name:addons-789485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-789485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:56:34.381790 4147311 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 12:56:34.381856 4147311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:56:34.419334 4147311 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 12:56:34.419360 4147311 containerd.go:534] Images already preloaded, skipping extraction
	I0819 12:56:34.419426 4147311 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:56:34.455413 4147311 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 12:56:34.455435 4147311 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:56:34.455444 4147311 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0819 12:56:34.455549 4147311 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-789485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-789485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 12:56:34.455618 4147311 ssh_runner.go:195] Run: sudo crictl info
	I0819 12:56:34.497372 4147311 cni.go:84] Creating CNI manager for ""
	I0819 12:56:34.497399 4147311 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:56:34.497409 4147311 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:56:34.497434 4147311 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-789485 NodeName:addons-789485 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:56:34.497571 4147311 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-789485"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:56:34.497646 4147311 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:56:34.506993 4147311 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:56:34.507065 4147311 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:56:34.515987 4147311 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 12:56:34.534023 4147311 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:56:34.552223 4147311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0819 12:56:34.571251 4147311 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 12:56:34.574597 4147311 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:56:34.585644 4147311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:56:34.674002 4147311 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:56:34.689865 4147311 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485 for IP: 192.168.49.2
	I0819 12:56:34.689924 4147311 certs.go:194] generating shared ca certs ...
	I0819 12:56:34.689963 4147311 certs.go:226] acquiring lock for ca certs: {Name:mkb3362db9c120e28de14409a94f066387768cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:34.690139 4147311 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key
	I0819 12:56:35.029945 4147311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt ...
	I0819 12:56:35.029979 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt: {Name:mkacde2a2bb94b0f8cb4e12b7e8814166a81a1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.030229 4147311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key ...
	I0819 12:56:35.030250 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key: {Name:mk68d7e401e0730bb37bcf4a3a1e52319e383cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.030852 4147311 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key
	I0819 12:56:35.417621 4147311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.crt ...
	I0819 12:56:35.417656 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.crt: {Name:mka122f440207cb962490f783e787f7d2eaed730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.417858 4147311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key ...
	I0819 12:56:35.417871 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key: {Name:mk79a3426335ea3060994284c8bba1d92c72ef9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.417968 4147311 certs.go:256] generating profile certs ...
	I0819 12:56:35.418034 4147311 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.key
	I0819 12:56:35.418060 4147311 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt with IP's: []
	I0819 12:56:35.700354 4147311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt ...
	I0819 12:56:35.700388 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: {Name:mkc63500d509dbdef868f08daf2eff48a4e23fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.701123 4147311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.key ...
	I0819 12:56:35.701141 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.key: {Name:mk7870c3d4aeb2c99f8c7221e9a8f5d6f0e978fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.701739 4147311 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.key.74786f64
	I0819 12:56:35.701765 4147311 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.crt.74786f64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 12:56:35.850131 4147311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.crt.74786f64 ...
	I0819 12:56:35.850163 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.crt.74786f64: {Name:mk06fb4aad9dad3e434c18249a5f8c68281969f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.850349 4147311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.key.74786f64 ...
	I0819 12:56:35.850365 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.key.74786f64: {Name:mkf6ec5000cb4cbc2b2685eccab2017ce54c4037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:35.850454 4147311 certs.go:381] copying /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.crt.74786f64 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.crt
	I0819 12:56:35.850534 4147311 certs.go:385] copying /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.key.74786f64 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.key
	I0819 12:56:35.850599 4147311 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.key
	I0819 12:56:35.850626 4147311 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.crt with IP's: []
	I0819 12:56:36.096504 4147311 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.crt ...
	I0819 12:56:36.096541 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.crt: {Name:mk35bbedfbde7aa66b1ab38f20c9456d4ed70e01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:36.096752 4147311 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.key ...
	I0819 12:56:36.096769 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.key: {Name:mk3bd53c1297d7434d9fb9116c0bcc38e6ab73d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:36.096982 4147311 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:56:36.097033 4147311 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:56:36.097060 4147311 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:56:36.097097 4147311 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem (1675 bytes)
	I0819 12:56:36.097762 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:56:36.124860 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:56:36.151884 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:56:36.176973 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 12:56:36.201223 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 12:56:36.225668 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:56:36.250183 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:56:36.275088 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 12:56:36.299538 4147311 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:56:36.327476 4147311 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:56:36.349375 4147311 ssh_runner.go:195] Run: openssl version
	I0819 12:56:36.354835 4147311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:56:36.364340 4147311 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:56:36.367824 4147311 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:56:36.367897 4147311 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:56:36.375378 4147311 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:56:36.384625 4147311 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:56:36.387839 4147311 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 12:56:36.387937 4147311 kubeadm.go:392] StartCluster: {Name:addons-789485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-789485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:56:36.388031 4147311 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 12:56:36.388091 4147311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:56:36.425061 4147311 cri.go:89] found id: ""
	I0819 12:56:36.425136 4147311 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:56:36.433975 4147311 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 12:56:36.443123 4147311 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 12:56:36.443208 4147311 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 12:56:36.452385 4147311 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 12:56:36.452406 4147311 kubeadm.go:157] found existing configuration files:
	
	I0819 12:56:36.452458 4147311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 12:56:36.461188 4147311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 12:56:36.461248 4147311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 12:56:36.469688 4147311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 12:56:36.478349 4147311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 12:56:36.478414 4147311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 12:56:36.487154 4147311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 12:56:36.498691 4147311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 12:56:36.498764 4147311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 12:56:36.507636 4147311 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 12:56:36.517216 4147311 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 12:56:36.517320 4147311 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 12:56:36.526118 4147311 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 12:56:36.573583 4147311 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 12:56:36.573774 4147311 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 12:56:36.590951 4147311 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 12:56:36.591026 4147311 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 12:56:36.591068 4147311 kubeadm.go:310] OS: Linux
	I0819 12:56:36.591120 4147311 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 12:56:36.591172 4147311 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 12:56:36.591220 4147311 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 12:56:36.591271 4147311 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 12:56:36.591319 4147311 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 12:56:36.591370 4147311 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 12:56:36.591417 4147311 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 12:56:36.591466 4147311 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 12:56:36.591514 4147311 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 12:56:36.654425 4147311 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 12:56:36.654571 4147311 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 12:56:36.654667 4147311 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 12:56:36.668130 4147311 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 12:56:36.672219 4147311 out.go:235]   - Generating certificates and keys ...
	I0819 12:56:36.672384 4147311 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 12:56:36.672461 4147311 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 12:56:37.488806 4147311 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 12:56:37.701702 4147311 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 12:56:37.930641 4147311 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 12:56:38.439947 4147311 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 12:56:38.824547 4147311 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 12:56:38.824844 4147311 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-789485 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 12:56:39.527930 4147311 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 12:56:39.528116 4147311 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-789485 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 12:56:40.250705 4147311 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 12:56:40.430798 4147311 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 12:56:40.716531 4147311 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 12:56:40.716897 4147311 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 12:56:41.251893 4147311 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 12:56:42.196826 4147311 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 12:56:42.897858 4147311 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 12:56:43.953145 4147311 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 12:56:44.704979 4147311 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 12:56:44.705926 4147311 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 12:56:44.709031 4147311 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 12:56:44.712154 4147311 out.go:235]   - Booting up control plane ...
	I0819 12:56:44.712273 4147311 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 12:56:44.712361 4147311 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 12:56:44.712890 4147311 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 12:56:44.725690 4147311 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 12:56:44.732532 4147311 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 12:56:44.732600 4147311 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 12:56:44.834077 4147311 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 12:56:44.834219 4147311 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 12:56:46.836021 4147311 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001811196s
	I0819 12:56:46.836119 4147311 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 12:56:52.837714 4147311 kubeadm.go:310] [api-check] The API server is healthy after 6.001683934s
	I0819 12:56:52.859076 4147311 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 12:56:52.876742 4147311 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 12:56:52.903964 4147311 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 12:56:52.904173 4147311 kubeadm.go:310] [mark-control-plane] Marking the node addons-789485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 12:56:52.915520 4147311 kubeadm.go:310] [bootstrap-token] Using token: jbbh95.2qimbpzrak6dpefg
	I0819 12:56:52.918336 4147311 out.go:235]   - Configuring RBAC rules ...
	I0819 12:56:52.918498 4147311 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 12:56:52.926323 4147311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 12:56:52.941310 4147311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 12:56:52.947039 4147311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 12:56:52.952981 4147311 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 12:56:52.958631 4147311 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 12:56:53.245467 4147311 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 12:56:53.684424 4147311 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 12:56:54.245299 4147311 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 12:56:54.246504 4147311 kubeadm.go:310] 
	I0819 12:56:54.246596 4147311 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 12:56:54.246603 4147311 kubeadm.go:310] 
	I0819 12:56:54.246677 4147311 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 12:56:54.246681 4147311 kubeadm.go:310] 
	I0819 12:56:54.246706 4147311 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 12:56:54.246762 4147311 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 12:56:54.246811 4147311 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 12:56:54.246816 4147311 kubeadm.go:310] 
	I0819 12:56:54.246868 4147311 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 12:56:54.246872 4147311 kubeadm.go:310] 
	I0819 12:56:54.246918 4147311 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 12:56:54.246922 4147311 kubeadm.go:310] 
	I0819 12:56:54.246972 4147311 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 12:56:54.247044 4147311 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 12:56:54.247109 4147311 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 12:56:54.247114 4147311 kubeadm.go:310] 
	I0819 12:56:54.247194 4147311 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 12:56:54.247268 4147311 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 12:56:54.247273 4147311 kubeadm.go:310] 
	I0819 12:56:54.247354 4147311 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jbbh95.2qimbpzrak6dpefg \
	I0819 12:56:54.247453 4147311 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:526be1a16141ea4231f47bdfd207f2f21320af5d9aae23337e8717d344429352 \
	I0819 12:56:54.247473 4147311 kubeadm.go:310] 	--control-plane 
	I0819 12:56:54.247478 4147311 kubeadm.go:310] 
	I0819 12:56:54.247559 4147311 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 12:56:54.247569 4147311 kubeadm.go:310] 
	I0819 12:56:54.247648 4147311 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jbbh95.2qimbpzrak6dpefg \
	I0819 12:56:54.247748 4147311 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:526be1a16141ea4231f47bdfd207f2f21320af5d9aae23337e8717d344429352 
	I0819 12:56:54.251907 4147311 kubeadm.go:310] W0819 12:56:36.568715    1021 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:56:54.252207 4147311 kubeadm.go:310] W0819 12:56:36.570159    1021 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 12:56:54.252430 4147311 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 12:56:54.252537 4147311 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 12:56:54.252556 4147311 cni.go:84] Creating CNI manager for ""
	I0819 12:56:54.252572 4147311 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:56:54.261230 4147311 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 12:56:54.267116 4147311 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 12:56:54.271379 4147311 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 12:56:54.271449 4147311 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 12:56:54.296771 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 12:56:54.586120 4147311 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 12:56:54.586256 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:54.586390 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-789485 minikube.k8s.io/updated_at=2024_08_19T12_56_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=addons-789485 minikube.k8s.io/primary=true
	I0819 12:56:54.718741 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:54.718802 4147311 ops.go:34] apiserver oom_adj: -16
	I0819 12:56:55.218884 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:55.718888 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:56.219721 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:56.718969 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:57.218897 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:57.719688 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:58.219057 4147311 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 12:56:58.306847 4147311 kubeadm.go:1113] duration metric: took 3.720636929s to wait for elevateKubeSystemPrivileges
	I0819 12:56:58.306875 4147311 kubeadm.go:394] duration metric: took 21.918943388s to StartCluster
	I0819 12:56:58.306893 4147311 settings.go:142] acquiring lock: {Name:mkaa4019b166703efd95aaa3737397f414197f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:58.307025 4147311 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 12:56:58.307417 4147311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/kubeconfig: {Name:mk7b0eea2060f71726f692d0256a33fdf7565e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:56:58.307613 4147311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 12:56:58.307639 4147311 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 12:56:58.307920 4147311 config.go:182] Loaded profile config "addons-789485": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:56:58.307955 4147311 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 12:56:58.308051 4147311 addons.go:69] Setting yakd=true in profile "addons-789485"
	I0819 12:56:58.308071 4147311 addons.go:234] Setting addon yakd=true in "addons-789485"
	I0819 12:56:58.308096 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.308546 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.309054 4147311 addons.go:69] Setting metrics-server=true in profile "addons-789485"
	I0819 12:56:58.309093 4147311 addons.go:234] Setting addon metrics-server=true in "addons-789485"
	I0819 12:56:58.309123 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.309135 4147311 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-789485"
	I0819 12:56:58.309159 4147311 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-789485"
	I0819 12:56:58.309182 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.309541 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.309603 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.312835 4147311 addons.go:69] Setting cloud-spanner=true in profile "addons-789485"
	I0819 12:56:58.312876 4147311 addons.go:234] Setting addon cloud-spanner=true in "addons-789485"
	I0819 12:56:58.312919 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.313361 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.314598 4147311 addons.go:69] Setting registry=true in profile "addons-789485"
	I0819 12:56:58.314691 4147311 addons.go:234] Setting addon registry=true in "addons-789485"
	I0819 12:56:58.314756 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.315290 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.316011 4147311 addons.go:69] Setting storage-provisioner=true in profile "addons-789485"
	I0819 12:56:58.316053 4147311 addons.go:234] Setting addon storage-provisioner=true in "addons-789485"
	I0819 12:56:58.316082 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.316516 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.325430 4147311 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-789485"
	I0819 12:56:58.325607 4147311 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-789485"
	I0819 12:56:58.328046 4147311 addons.go:69] Setting volcano=true in profile "addons-789485"
	I0819 12:56:58.328121 4147311 addons.go:234] Setting addon volcano=true in "addons-789485"
	I0819 12:56:58.328416 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.335416 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.325653 4147311 out.go:177] * Verifying Kubernetes components...
	I0819 12:56:58.328245 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.367194 4147311 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:56:58.325515 4147311 addons.go:69] Setting default-storageclass=true in profile "addons-789485"
	I0819 12:56:58.369787 4147311 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-789485"
	I0819 12:56:58.370289 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.325525 4147311 addons.go:69] Setting gcp-auth=true in profile "addons-789485"
	I0819 12:56:58.385456 4147311 mustload.go:65] Loading cluster: addons-789485
	I0819 12:56:58.385695 4147311 config.go:182] Loaded profile config "addons-789485": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:56:58.386070 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.325529 4147311 addons.go:69] Setting ingress=true in profile "addons-789485"
	I0819 12:56:58.416840 4147311 addons.go:234] Setting addon ingress=true in "addons-789485"
	I0819 12:56:58.416927 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.417423 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.325533 4147311 addons.go:69] Setting ingress-dns=true in profile "addons-789485"
	I0819 12:56:58.431660 4147311 addons.go:234] Setting addon ingress-dns=true in "addons-789485"
	I0819 12:56:58.431719 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.436340 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.437800 4147311 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 12:56:58.440852 4147311 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 12:56:58.440918 4147311 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 12:56:58.441032 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.456194 4147311 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 12:56:58.325536 4147311 addons.go:69] Setting inspektor-gadget=true in profile "addons-789485"
	I0819 12:56:58.457416 4147311 addons.go:234] Setting addon inspektor-gadget=true in "addons-789485"
	I0819 12:56:58.457463 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.457917 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.459046 4147311 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 12:56:58.459098 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 12:56:58.459187 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.472653 4147311 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 12:56:58.475948 4147311 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 12:56:58.480510 4147311 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 12:56:58.480624 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.325508 4147311 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-789485"
	I0819 12:56:58.488493 4147311 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-789485"
	I0819 12:56:58.488538 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.489002 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.328311 4147311 addons.go:69] Setting volumesnapshots=true in profile "addons-789485"
	I0819 12:56:58.498777 4147311 addons.go:234] Setting addon volumesnapshots=true in "addons-789485"
	I0819 12:56:58.498821 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.499296 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.510544 4147311 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 12:56:58.516066 4147311 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 12:56:58.527127 4147311 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 12:56:58.527282 4147311 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:56:58.530258 4147311 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 12:56:58.530288 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 12:56:58.530359 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.539167 4147311 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:56:58.539234 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:56:58.539352 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.539665 4147311 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 12:56:58.539703 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 12:56:58.539764 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.564104 4147311 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-789485"
	I0819 12:56:58.564143 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.564568 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.582739 4147311 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0819 12:56:58.592065 4147311 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0819 12:56:58.594899 4147311 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0819 12:56:58.595370 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.601955 4147311 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 12:56:58.602064 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0819 12:56:58.602182 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.617250 4147311 addons.go:234] Setting addon default-storageclass=true in "addons-789485"
	I0819 12:56:58.617369 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:56:58.618063 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:56:58.656921 4147311 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 12:56:58.659680 4147311 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 12:56:58.659704 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 12:56:58.659777 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.666469 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.683889 4147311 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 12:56:58.688209 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 12:56:58.688238 4147311 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 12:56:58.688311 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.695858 4147311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 12:56:58.699551 4147311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 12:56:58.702274 4147311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 12:56:58.708003 4147311 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 12:56:58.708072 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 12:56:58.708174 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.741071 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.747905 4147311 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 12:56:58.748188 4147311 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:56:58.751268 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 12:56:58.755098 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 12:56:58.758604 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 12:56:58.763111 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 12:56:58.766397 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 12:56:58.767497 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.773731 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 12:56:58.777674 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 12:56:58.780975 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 12:56:58.784210 4147311 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 12:56:58.784239 4147311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 12:56:58.784315 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.795910 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.799899 4147311 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 12:56:58.803643 4147311 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 12:56:58.803667 4147311 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 12:56:58.803738 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.819820 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.820547 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.887927 4147311 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 12:56:58.895674 4147311 out.go:177]   - Using image docker.io/busybox:stable
	I0819 12:56:58.900009 4147311 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 12:56:58.900039 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 12:56:58.900113 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.920212 4147311 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:56:58.920234 4147311 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:56:58.920300 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:56:58.930048 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.930918 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.931481 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.934931 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.974614 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.983896 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	W0819 12:56:58.988244 4147311 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 12:56:58.988338 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:58.988906 4147311 retry.go:31] will retry after 180.109029ms: ssh: handshake failed: EOF
	W0819 12:56:58.988982 4147311 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 12:56:58.988994 4147311 retry.go:31] will retry after 314.006879ms: ssh: handshake failed: EOF
	I0819 12:56:59.001871 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:56:59.280842 4147311 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 12:56:59.280942 4147311 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 12:56:59.638127 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 12:56:59.675558 4147311 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 12:56:59.675583 4147311 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 12:56:59.693548 4147311 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 12:56:59.693572 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 12:56:59.698609 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 12:56:59.701748 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:56:59.765137 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 12:56:59.795007 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 12:56:59.811005 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:56:59.871971 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 12:56:59.927832 4147311 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 12:56:59.927863 4147311 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 12:56:59.941747 4147311 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 12:56:59.941784 4147311 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 12:56:59.983055 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 12:56:59.986560 4147311 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 12:56:59.986590 4147311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 12:57:00.022115 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 12:57:00.022148 4147311 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 12:57:00.179894 4147311 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 12:57:00.179927 4147311 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 12:57:00.248378 4147311 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 12:57:00.248411 4147311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 12:57:00.311771 4147311 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 12:57:00.311963 4147311 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 12:57:00.353026 4147311 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 12:57:00.353051 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 12:57:00.419674 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 12:57:00.419708 4147311 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 12:57:00.575818 4147311 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 12:57:00.575848 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 12:57:00.671749 4147311 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 12:57:00.671775 4147311 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 12:57:00.708140 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 12:57:00.791933 4147311 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 12:57:00.791960 4147311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 12:57:00.804253 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 12:57:00.879206 4147311 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 12:57:00.879233 4147311 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 12:57:00.907018 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 12:57:00.907044 4147311 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 12:57:00.978539 4147311 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 12:57:00.978566 4147311 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 12:57:01.152403 4147311 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 12:57:01.152441 4147311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 12:57:01.277423 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 12:57:01.298791 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 12:57:01.298816 4147311 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 12:57:01.300330 4147311 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.55211071s)
	I0819 12:57:01.300425 4147311 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.552497316s)
	I0819 12:57:01.300452 4147311 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 12:57:01.302255 4147311 node_ready.go:35] waiting up to 6m0s for node "addons-789485" to be "Ready" ...
	I0819 12:57:01.318695 4147311 node_ready.go:49] node "addons-789485" has status "Ready":"True"
	I0819 12:57:01.318725 4147311 node_ready.go:38] duration metric: took 16.435105ms for node "addons-789485" to be "Ready" ...
	I0819 12:57:01.318738 4147311 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:57:01.333690 4147311 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4wgkm" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:01.450071 4147311 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 12:57:01.450097 4147311 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 12:57:01.563570 4147311 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 12:57:01.563598 4147311 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 12:57:01.754878 4147311 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 12:57:01.754905 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 12:57:01.809072 4147311 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-789485" context rescaled to 1 replicas
	I0819 12:57:01.826887 4147311 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 12:57:01.826912 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 12:57:01.836659 4147311 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-4wgkm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-4wgkm" not found
	I0819 12:57:01.836689 4147311 pod_ready.go:82] duration metric: took 502.963939ms for pod "coredns-6f6b679f8f-4wgkm" in "kube-system" namespace to be "Ready" ...
	E0819 12:57:01.836701 4147311 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-4wgkm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-4wgkm" not found
	I0819 12:57:01.836735 4147311 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:01.915584 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 12:57:01.915610 4147311 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 12:57:02.017306 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 12:57:02.048054 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.409884034s)
	I0819 12:57:02.151815 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 12:57:02.151842 4147311 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 12:57:02.158641 4147311 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 12:57:02.158668 4147311 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 12:57:02.374633 4147311 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 12:57:02.374658 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 12:57:02.449214 4147311 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 12:57:02.449241 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 12:57:02.705926 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 12:57:02.768881 4147311 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 12:57:02.768915 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 12:57:03.063110 4147311 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 12:57:03.063139 4147311 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 12:57:03.327223 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 12:57:03.872805 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:05.808517 4147311 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 12:57:05.808629 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:57:05.831275 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:57:06.428611 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:06.675738 4147311 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 12:57:06.831553 4147311 addons.go:234] Setting addon gcp-auth=true in "addons-789485"
	I0819 12:57:06.831619 4147311 host.go:66] Checking if "addons-789485" exists ...
	I0819 12:57:06.832147 4147311 cli_runner.go:164] Run: docker container inspect addons-789485 --format={{.State.Status}}
	I0819 12:57:06.864104 4147311 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 12:57:06.864165 4147311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789485
	I0819 12:57:06.892952 4147311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38260 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/addons-789485/id_rsa Username:docker}
	I0819 12:57:08.863921 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:09.298585 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.596803843s)
	I0819 12:57:09.298809 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.533648708s)
	I0819 12:57:09.298915 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.503884664s)
	I0819 12:57:09.298984 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.487959661s)
	I0819 12:57:09.299019 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.427026506s)
	I0819 12:57:09.299123 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.316041121s)
	I0819 12:57:09.299137 4147311 addons.go:475] Verifying addon ingress=true in "addons-789485"
	I0819 12:57:09.299329 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.59116101s)
	I0819 12:57:09.299348 4147311 addons.go:475] Verifying addon registry=true in "addons-789485"
	I0819 12:57:09.299410 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.600777876s)
	I0819 12:57:09.299455 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.495174696s)
	I0819 12:57:09.299881 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.28254249s)
	W0819 12:57:09.299914 4147311 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 12:57:09.299931 4147311 retry.go:31] will retry after 353.509262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 12:57:09.300020 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.594065706s)
	I0819 12:57:09.299746 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.022291647s)
	I0819 12:57:09.300098 4147311 addons.go:475] Verifying addon metrics-server=true in "addons-789485"
	I0819 12:57:09.301631 4147311 out.go:177] * Verifying ingress addon...
	I0819 12:57:09.301631 4147311 out.go:177] * Verifying registry addon...
	I0819 12:57:09.301720 4147311 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-789485 service yakd-dashboard -n yakd-dashboard
	
	I0819 12:57:09.303915 4147311 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 12:57:09.303963 4147311 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 12:57:09.369393 4147311 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 12:57:09.370069 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:09.370038 4147311 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 12:57:09.370127 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0819 12:57:09.382895 4147311 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 12:57:09.653652 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 12:57:09.822357 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:09.825704 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:10.066299 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.739025045s)
	I0819 12:57:10.066441 4147311 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.202308499s)
	I0819 12:57:10.066479 4147311 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-789485"
	I0819 12:57:10.069696 4147311 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 12:57:10.069712 4147311 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 12:57:10.073222 4147311 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 12:57:10.074207 4147311 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 12:57:10.076308 4147311 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 12:57:10.076336 4147311 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 12:57:10.079197 4147311 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 12:57:10.079270 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:10.127665 4147311 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 12:57:10.127743 4147311 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 12:57:10.189257 4147311 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 12:57:10.189331 4147311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 12:57:10.212311 4147311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 12:57:10.314873 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:10.315612 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:10.594225 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:10.810012 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:10.811160 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:11.081997 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:11.269178 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.615456206s)
	I0819 12:57:11.310289 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:11.310889 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:11.357455 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:11.479312 4147311 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.266966117s)
	I0819 12:57:11.482829 4147311 addons.go:475] Verifying addon gcp-auth=true in "addons-789485"
	I0819 12:57:11.485873 4147311 out.go:177] * Verifying gcp-auth addon...
	I0819 12:57:11.489638 4147311 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 12:57:11.498262 4147311 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 12:57:11.579122 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:11.810329 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:11.811341 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:12.080420 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:12.308381 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:12.309715 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:12.596723 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:12.811173 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:12.812270 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:13.079707 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:13.310174 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:13.311157 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:13.374580 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:13.579428 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:13.811069 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:13.811658 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:14.079821 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:14.309709 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:14.310915 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:14.580217 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:14.809505 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:14.812380 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:15.100572 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:15.309972 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:15.311296 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:15.596332 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:15.810255 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:15.811573 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:15.850078 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:16.099552 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:16.309293 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:16.309837 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:16.579069 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:16.812304 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:16.813604 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:17.081804 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:17.309087 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:17.310033 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:17.578975 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:17.817150 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:17.818898 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:17.880962 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:18.082014 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:18.308449 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:18.309385 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:18.598812 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:18.809148 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:18.809852 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:19.080468 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:19.309474 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:19.310460 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:19.579903 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:19.807751 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:19.809295 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:20.079998 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:20.309189 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:20.310163 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:20.352940 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:20.579618 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:20.810258 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:20.810921 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:21.080606 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:21.310743 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:21.312173 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:21.578810 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:21.810096 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:21.812366 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:22.080290 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:22.309384 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:22.309974 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:22.580107 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:22.810995 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:22.812484 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:22.844898 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:23.080261 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:23.310238 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:23.311224 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:23.579934 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:23.810511 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:23.812569 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:24.097747 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:24.309445 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:24.310393 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:24.582679 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:24.809912 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:24.810446 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:25.079208 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:25.308590 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:25.309518 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:25.343866 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:25.580554 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:25.808804 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:25.810341 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:26.084411 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:26.309052 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:26.310397 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:26.597163 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:26.814190 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:26.816157 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:27.098175 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:27.310624 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:27.312032 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:27.579776 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:27.808949 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:27.809579 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:27.842996 4147311 pod_ready.go:103] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"False"
	I0819 12:57:28.085545 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:28.323334 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:28.324358 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:28.344865 4147311 pod_ready.go:93] pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace has status "Ready":"True"
	I0819 12:57:28.344893 4147311 pod_ready.go:82] duration metric: took 26.50814451s for pod "coredns-6f6b679f8f-jdpwj" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.344913 4147311 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.358110 4147311 pod_ready.go:93] pod "etcd-addons-789485" in "kube-system" namespace has status "Ready":"True"
	I0819 12:57:28.358143 4147311 pod_ready.go:82] duration metric: took 13.220451ms for pod "etcd-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.358160 4147311 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.370872 4147311 pod_ready.go:93] pod "kube-apiserver-addons-789485" in "kube-system" namespace has status "Ready":"True"
	I0819 12:57:28.370901 4147311 pod_ready.go:82] duration metric: took 12.731823ms for pod "kube-apiserver-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.370915 4147311 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.403242 4147311 pod_ready.go:93] pod "kube-controller-manager-addons-789485" in "kube-system" namespace has status "Ready":"True"
	I0819 12:57:28.403274 4147311 pod_ready.go:82] duration metric: took 32.348704ms for pod "kube-controller-manager-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.403287 4147311 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ctgc7" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.421546 4147311 pod_ready.go:93] pod "kube-proxy-ctgc7" in "kube-system" namespace has status "Ready":"True"
	I0819 12:57:28.421574 4147311 pod_ready.go:82] duration metric: took 18.278405ms for pod "kube-proxy-ctgc7" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.421585 4147311 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.579928 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:28.740507 4147311 pod_ready.go:93] pod "kube-scheduler-addons-789485" in "kube-system" namespace has status "Ready":"True"
	I0819 12:57:28.740578 4147311 pod_ready.go:82] duration metric: took 318.983766ms for pod "kube-scheduler-addons-789485" in "kube-system" namespace to be "Ready" ...
	I0819 12:57:28.740602 4147311 pod_ready.go:39] duration metric: took 27.421814604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:57:28.740645 4147311 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:57:28.740746 4147311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:57:28.756624 4147311 api_server.go:72] duration metric: took 30.448939316s to wait for apiserver process to appear ...
	I0819 12:57:28.756686 4147311 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:57:28.756729 4147311 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 12:57:28.765029 4147311 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 12:57:28.766140 4147311 api_server.go:141] control plane version: v1.31.0
	I0819 12:57:28.766201 4147311 api_server.go:131] duration metric: took 9.483685ms to wait for apiserver health ...
	I0819 12:57:28.766225 4147311 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 12:57:28.809516 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:28.810169 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:28.949621 4147311 system_pods.go:59] 18 kube-system pods found
	I0819 12:57:28.949662 4147311 system_pods.go:61] "coredns-6f6b679f8f-jdpwj" [cd5209f1-b974-4edb-b1ee-d21f13c96c5a] Running
	I0819 12:57:28.949673 4147311 system_pods.go:61] "csi-hostpath-attacher-0" [edb6388b-ae0a-44fd-85a4-e200a65140bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 12:57:28.949681 4147311 system_pods.go:61] "csi-hostpath-resizer-0" [0e2ed56f-8587-4ffb-9a12-bb6ec4c0957e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 12:57:28.949689 4147311 system_pods.go:61] "csi-hostpathplugin-f28dh" [24b3cbd7-7678-4049-97ff-bbac2dc106a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 12:57:28.949699 4147311 system_pods.go:61] "etcd-addons-789485" [7cb33f05-d566-46f3-82aa-6c431894d61b] Running
	I0819 12:57:28.949704 4147311 system_pods.go:61] "kindnet-vxhfm" [9aff7f53-ed19-410f-8af1-e825df4677a7] Running
	I0819 12:57:28.949714 4147311 system_pods.go:61] "kube-apiserver-addons-789485" [1ad053eb-1f25-401e-b535-9f0fdf3aabff] Running
	I0819 12:57:28.949718 4147311 system_pods.go:61] "kube-controller-manager-addons-789485" [f7ab4b73-5a51-4064-b6e3-c42b868f9922] Running
	I0819 12:57:28.949723 4147311 system_pods.go:61] "kube-ingress-dns-minikube" [d980b3af-c7bc-41c1-9ce9-efbda3bd6a46] Running
	I0819 12:57:28.949733 4147311 system_pods.go:61] "kube-proxy-ctgc7" [225cacb4-1039-47d0-bbe9-6ac132bcadd6] Running
	I0819 12:57:28.949737 4147311 system_pods.go:61] "kube-scheduler-addons-789485" [0f1ba7bd-234c-46d0-a54b-ef554b13acd8] Running
	I0819 12:57:28.949745 4147311 system_pods.go:61] "metrics-server-8988944d9-7576p" [d98f94a6-9145-451d-9c58-60ff2d0a603d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 12:57:28.949753 4147311 system_pods.go:61] "nvidia-device-plugin-daemonset-2l8sd" [70892ae8-95d3-48c7-b918-33a39d71c08b] Running
	I0819 12:57:28.949758 4147311 system_pods.go:61] "registry-6fb4cdfc84-gtgx5" [d1858f78-2020-413b-b3d6-e5957d671bc6] Running
	I0819 12:57:28.949765 4147311 system_pods.go:61] "registry-proxy-2kb7c" [19d924e8-1aab-4715-8b17-070f1796dd3f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 12:57:28.949780 4147311 system_pods.go:61] "snapshot-controller-56fcc65765-67nb6" [8d8d213a-307c-430d-9ad1-3e59cbf1e084] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 12:57:28.949787 4147311 system_pods.go:61] "snapshot-controller-56fcc65765-8ch45" [2dc7d799-8c77-49ff-ba01-222670e11ddb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 12:57:28.949792 4147311 system_pods.go:61] "storage-provisioner" [f10e4f8c-76bc-4c9b-9bf1-75dc4a2d68e6] Running
	I0819 12:57:28.949802 4147311 system_pods.go:74] duration metric: took 183.557136ms to wait for pod list to return data ...
	I0819 12:57:28.949816 4147311 default_sa.go:34] waiting for default service account to be created ...
	I0819 12:57:29.079398 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:29.141013 4147311 default_sa.go:45] found service account: "default"
	I0819 12:57:29.141040 4147311 default_sa.go:55] duration metric: took 191.217392ms for default service account to be created ...
	I0819 12:57:29.141051 4147311 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 12:57:29.309599 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:29.309716 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:29.349798 4147311 system_pods.go:86] 18 kube-system pods found
	I0819 12:57:29.349841 4147311 system_pods.go:89] "coredns-6f6b679f8f-jdpwj" [cd5209f1-b974-4edb-b1ee-d21f13c96c5a] Running
	I0819 12:57:29.349854 4147311 system_pods.go:89] "csi-hostpath-attacher-0" [edb6388b-ae0a-44fd-85a4-e200a65140bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 12:57:29.349862 4147311 system_pods.go:89] "csi-hostpath-resizer-0" [0e2ed56f-8587-4ffb-9a12-bb6ec4c0957e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 12:57:29.349872 4147311 system_pods.go:89] "csi-hostpathplugin-f28dh" [24b3cbd7-7678-4049-97ff-bbac2dc106a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 12:57:29.349876 4147311 system_pods.go:89] "etcd-addons-789485" [7cb33f05-d566-46f3-82aa-6c431894d61b] Running
	I0819 12:57:29.349882 4147311 system_pods.go:89] "kindnet-vxhfm" [9aff7f53-ed19-410f-8af1-e825df4677a7] Running
	I0819 12:57:29.349889 4147311 system_pods.go:89] "kube-apiserver-addons-789485" [1ad053eb-1f25-401e-b535-9f0fdf3aabff] Running
	I0819 12:57:29.349894 4147311 system_pods.go:89] "kube-controller-manager-addons-789485" [f7ab4b73-5a51-4064-b6e3-c42b868f9922] Running
	I0819 12:57:29.349900 4147311 system_pods.go:89] "kube-ingress-dns-minikube" [d980b3af-c7bc-41c1-9ce9-efbda3bd6a46] Running
	I0819 12:57:29.349911 4147311 system_pods.go:89] "kube-proxy-ctgc7" [225cacb4-1039-47d0-bbe9-6ac132bcadd6] Running
	I0819 12:57:29.349916 4147311 system_pods.go:89] "kube-scheduler-addons-789485" [0f1ba7bd-234c-46d0-a54b-ef554b13acd8] Running
	I0819 12:57:29.349925 4147311 system_pods.go:89] "metrics-server-8988944d9-7576p" [d98f94a6-9145-451d-9c58-60ff2d0a603d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 12:57:29.349930 4147311 system_pods.go:89] "nvidia-device-plugin-daemonset-2l8sd" [70892ae8-95d3-48c7-b918-33a39d71c08b] Running
	I0819 12:57:29.349946 4147311 system_pods.go:89] "registry-6fb4cdfc84-gtgx5" [d1858f78-2020-413b-b3d6-e5957d671bc6] Running
	I0819 12:57:29.349953 4147311 system_pods.go:89] "registry-proxy-2kb7c" [19d924e8-1aab-4715-8b17-070f1796dd3f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 12:57:29.349961 4147311 system_pods.go:89] "snapshot-controller-56fcc65765-67nb6" [8d8d213a-307c-430d-9ad1-3e59cbf1e084] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 12:57:29.349971 4147311 system_pods.go:89] "snapshot-controller-56fcc65765-8ch45" [2dc7d799-8c77-49ff-ba01-222670e11ddb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 12:57:29.349975 4147311 system_pods.go:89] "storage-provisioner" [f10e4f8c-76bc-4c9b-9bf1-75dc4a2d68e6] Running
	I0819 12:57:29.349983 4147311 system_pods.go:126] duration metric: took 208.926719ms to wait for k8s-apps to be running ...
	I0819 12:57:29.350004 4147311 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 12:57:29.350066 4147311 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:57:29.365326 4147311 system_svc.go:56] duration metric: took 15.283894ms WaitForService to wait for kubelet
	I0819 12:57:29.365404 4147311 kubeadm.go:582] duration metric: took 31.057736926s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:57:29.365442 4147311 node_conditions.go:102] verifying NodePressure condition ...
	I0819 12:57:29.542355 4147311 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 12:57:29.542443 4147311 node_conditions.go:123] node cpu capacity is 2
	I0819 12:57:29.542466 4147311 node_conditions.go:105] duration metric: took 176.985454ms to run NodePressure ...
	I0819 12:57:29.542480 4147311 start.go:241] waiting for startup goroutines ...
	I0819 12:57:29.579120 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:29.808192 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:29.809282 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 12:57:30.080709 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:30.308357 4147311 kapi.go:107] duration metric: took 21.004441135s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 12:57:30.309649 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:30.579166 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:30.809236 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:31.096414 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:31.309342 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:31.580303 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:31.808188 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:32.079436 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:32.310057 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:32.578619 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:32.809271 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:33.096512 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:33.309645 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:33.581557 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:33.809509 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:34.096875 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:34.309525 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:34.579587 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:34.809248 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:35.093889 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:35.313503 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:35.579553 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:35.809072 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:36.081154 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:36.309164 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:36.579409 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:36.809297 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:37.080248 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:37.309544 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:37.581152 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:37.810381 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:38.080779 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:38.309559 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:38.579099 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:38.809194 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:39.079316 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:39.308796 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:39.579635 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:39.808412 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:40.081854 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:40.317718 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:40.579263 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:40.810301 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:41.082486 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:41.309196 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:41.583660 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:41.808988 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:42.087907 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:42.312069 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:42.596996 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:42.809690 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:43.080934 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:43.311393 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:43.578693 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:43.808840 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:44.081368 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:44.309269 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:44.597860 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:44.808280 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:45.085319 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:45.310762 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:45.598990 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:45.809468 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:46.080615 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:46.309117 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:46.579226 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:46.808875 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:47.079244 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:47.308418 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:47.579220 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:47.809564 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:48.097764 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:48.308944 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:48.579330 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:48.808636 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:49.079880 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:49.310182 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:49.580033 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:49.810345 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:50.079274 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:50.308729 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:50.578667 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:50.808763 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:51.079844 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:51.308121 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:51.580362 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:51.809051 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:52.079529 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:52.309326 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:52.578771 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:52.808758 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:53.078588 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:53.309029 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:53.581217 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:53.808338 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:54.079991 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:54.310014 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:54.579021 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:54.809217 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:55.079567 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:55.313997 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:55.579217 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:55.808591 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:56.079321 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:56.311197 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:56.579912 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:56.808478 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:57.095330 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:57.316681 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:57.579502 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:57.809314 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:58.081089 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:58.310181 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:58.578859 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:58.809340 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:59.078927 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 12:57:59.309375 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:57:59.578892 4147311 kapi.go:107] duration metric: took 49.504682007s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 12:57:59.809752 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:00.353718 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:00.808759 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:01.313760 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:01.809213 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:02.309219 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:02.809260 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:03.308586 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:03.807923 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:04.308570 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:04.809482 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:05.309126 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:05.808789 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:06.309058 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:06.808446 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:07.309850 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:07.807921 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:08.308782 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:08.809059 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:09.307889 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:09.808538 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:10.309079 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:10.808308 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:11.308073 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:11.808855 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:12.309707 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:12.808598 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:13.309485 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:13.809566 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:14.308474 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:14.808740 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:15.309144 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:15.808601 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:16.309801 4147311 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 12:58:16.808778 4147311 kapi.go:107] duration metric: took 1m7.504807991s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 12:58:34.996028 4147311 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 12:58:34.996059 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:35.493239 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:35.993767 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:36.493737 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:36.993303 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:37.492876 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:37.994154 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:38.494410 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:38.993541 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:39.493807 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:39.993875 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:40.494300 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:40.993347 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:41.492839 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:41.993057 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:42.494134 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:42.993024 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:43.494252 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:43.993845 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:44.492961 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:44.993005 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:45.493648 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:45.993843 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:46.494154 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:46.996214 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:47.493107 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:47.993223 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:48.492906 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:48.993780 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:49.493528 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:49.993171 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:50.493647 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:50.993755 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:51.493588 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:51.993601 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:52.493187 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:52.993775 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:53.493274 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:53.994090 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:54.493181 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:54.992804 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:55.493262 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:55.994122 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:56.493092 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:56.993318 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:57.495649 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:57.994243 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:58.493351 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:58.992959 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:59.493501 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:58:59.993237 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:00.499989 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:00.993687 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:01.492895 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:01.993961 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:02.493297 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:02.993867 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:03.493447 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:03.993219 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:04.494243 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:04.992972 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:05.494305 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:05.993498 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:06.494243 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:06.992803 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:07.493405 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:07.993708 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:08.492836 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:08.993839 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:09.493573 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:09.993221 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:10.494418 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:10.993389 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:11.492943 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:11.993761 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:12.493989 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:12.994098 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:13.493738 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:13.993433 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:14.511483 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:14.993125 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:15.493300 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:15.993389 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:16.493442 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:16.993127 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:17.493557 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:17.993065 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:18.494469 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:18.993513 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:19.493700 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:19.993910 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:20.494108 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:20.993643 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:21.492854 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:21.994148 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:22.493039 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:22.994160 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:23.493331 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:23.993026 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:24.495353 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:24.993469 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:25.493377 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:25.994588 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:26.493305 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:26.993429 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:27.492882 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:27.993797 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:28.493910 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:28.993437 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:29.492979 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:29.993634 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:30.493543 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:30.993012 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:31.494043 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:31.992924 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:32.493317 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:32.993330 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:33.493478 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:33.993511 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:34.494025 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:34.992936 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:35.492923 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:35.994045 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:36.493202 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:36.993736 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:37.493562 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:37.996593 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:38.492945 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:38.993729 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:39.493848 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:39.993890 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:40.494438 4147311 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 12:59:40.994280 4147311 kapi.go:107] duration metric: took 2m29.504637227s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 12:59:40.995771 4147311 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-789485 cluster.
	I0819 12:59:40.997547 4147311 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 12:59:40.998784 4147311 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 12:59:41.000316 4147311 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, volcano, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 12:59:41.001998 4147311 addons.go:510] duration metric: took 2m42.694029863s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner volcano inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 12:59:41.002058 4147311 start.go:246] waiting for cluster config update ...
	I0819 12:59:41.002080 4147311 start.go:255] writing updated cluster config ...
	I0819 12:59:41.002422 4147311 ssh_runner.go:195] Run: rm -f paused
	I0819 12:59:41.390339 4147311 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 12:59:41.393405 4147311 out.go:177] * Done! kubectl is now configured to use "addons-789485" cluster and "default" namespace by default
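	(Note on the gcp-auth hint printed above: the addon says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest follows; the pod name, image, and the label value "true" are illustrative assumptions, since the log only shows the label key, not the value the webhook checks.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical example name
	  labels:
	    gcp-auth-skip-secret: "true"     # label key taken from the addon output above; value assumed
	spec:
	  containers:
	    - name: app
	      image: nginx                   # placeholder image for illustration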
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	6a2e16014ca7d       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   b3752430a738a       gadget-f7455
	f080d54401f33       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   120db5598a0a4       gcp-auth-89d5ffd79-r6rmq
	ee32d5d2c4d10       8b46b1cd48760       4 minutes ago       Running             admission                                0                   6106cf77b4638       volcano-admission-77d7d48b68-wvwhd
	af4d6cbf4cd6e       289a818c8d9c5       4 minutes ago       Running             controller                               0                   5cd47c41a3c5e       ingress-nginx-controller-bc57996ff-7k67n
	746a773af9a3d       420193b27261a       5 minutes ago       Exited              patch                                    2                   f9fda60ace412       ingress-nginx-admission-patch-c5w78
	42425ed9b0404       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   94c3559943fe5       csi-hostpathplugin-f28dh
	7d28309808675       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   94c3559943fe5       csi-hostpathplugin-f28dh
	0342de3aee912       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   94c3559943fe5       csi-hostpathplugin-f28dh
	93dd35cb0e36d       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   94c3559943fe5       csi-hostpathplugin-f28dh
	ca7cfe78d707e       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   94c3559943fe5       csi-hostpathplugin-f28dh
	499c4afb4ebc3       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   7d79b4a0d08fa       csi-hostpath-resizer-0
	44f9376eaeca3       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   b4b6e068693df       snapshot-controller-56fcc65765-8ch45
	8237a8a20d1ac       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   94c3559943fe5       csi-hostpathplugin-f28dh
	3fb942ffe6eaa       420193b27261a       5 minutes ago       Exited              create                                   0                   b95a0dd98d604       ingress-nginx-admission-create-p7xtw
	c31aa0d3adaf7       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   9090841319f60       snapshot-controller-56fcc65765-67nb6
	bc792fac37b81       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   5a6eeb7d66115       volcano-scheduler-576bc46687-76ltd
	56ff7ff81b0ca       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   275d565b9f5d6       csi-hostpath-attacher-0
	c2bef53bd66d5       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   6a5dc4d26d444       volcano-controllers-56675bb4d5-lw997
	1c8612f9a5c5a       77bdba588b953       5 minutes ago       Running             yakd                                     0                   388b0b012e544       yakd-dashboard-67d98fc6b-g77xw
	8610a960f83fe       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   beb6b625577b3       metrics-server-8988944d9-7576p
	246e8b3ec7864       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   e4814ab523731       cloud-spanner-emulator-c4bc9b5f8-hvxxd
	109396d74c8e2       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   4b1a8a6a190e3       registry-proxy-2kb7c
	b7ac204163c82       2437cf7621777       5 minutes ago       Running             coredns                                  0                   39c47aad809e1       coredns-6f6b679f8f-jdpwj
	cdac2eacebbc2       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   364ff731f153d       local-path-provisioner-86d989889c-9bxjr
	39ee32b1fa9dd       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ad4dbdc752bcb       nvidia-device-plugin-daemonset-2l8sd
	f89924266f2b8       6fed88f43b276       5 minutes ago       Running             registry                                 0                   f0cd9afec0a4b       registry-6fb4cdfc84-gtgx5
	e10717bbeca4a       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   f3a1bd660ccc9       kube-ingress-dns-minikube
	0dca0c7b099a9       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   db2c0caecf9da       storage-provisioner
	faa042b0b3d5e       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   e334e80f89172       kindnet-vxhfm
	928e1bed9bc12       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   bf3ad87ca50aa       kube-proxy-ctgc7
	8c0a8c64565a1       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   bcd5ea7b2d7d8       kube-controller-manager-addons-789485
	1b13396dc2608       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   3feb9b3d15600       kube-apiserver-addons-789485
	f9a78240a0ab5       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   2983bd1ce9669       kube-scheduler-addons-789485
	9eaa83c2b75f4       27e3830e14027       6 minutes ago       Running             etcd                                     0                   03d5f195701c9       etcd-addons-789485
	
	
	==> containerd <==
	Aug 19 13:00:35 addons-789485 containerd[817]: time="2024-08-19T13:00:35.752639102Z" level=info msg="CreateContainer within sandbox \"b3752430a738a856c13a5c57425a095185b151b4c1e7eb334029754bb1a181a9\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 19 13:00:35 addons-789485 containerd[817]: time="2024-08-19T13:00:35.770676443Z" level=info msg="CreateContainer within sandbox \"b3752430a738a856c13a5c57425a095185b151b4c1e7eb334029754bb1a181a9\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\""
	Aug 19 13:00:35 addons-789485 containerd[817]: time="2024-08-19T13:00:35.771368983Z" level=info msg="StartContainer for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\""
	Aug 19 13:00:35 addons-789485 containerd[817]: time="2024-08-19T13:00:35.823137434Z" level=info msg="StartContainer for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\" returns successfully"
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.032431627Z" level=error msg="ExecSync for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\" failed" error="failed to exec in container: failed to start exec \"b565fdc4d2175e027820d04f2d1c184b0b462caf3bf68af4e95fb74779d93201\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.051040812Z" level=error msg="ExecSync for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\" failed" error="failed to exec in container: failed to start exec \"bb57722f32c62558f5f1213ea69e91417acec4d613e857ce6098419afe56b431\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.072690202Z" level=error msg="ExecSync for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\" failed" error="failed to exec in container: failed to start exec \"50d6a77a9e7b941b538948b593cc54887f996527e0725f2679d462a45dd33bce\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.073524132Z" level=error msg="ExecSync for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\" failed" error="failed to exec in container: failed to start exec \"b0aa158180f6133e691a4a87c206b7a26ca51af0973577de957cc21818d8041d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.086399519Z" level=error msg="ExecSync for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\" failed" error="failed to exec in container: failed to start exec \"f552a33dc7b54098c65de5aa13ec0aeec2d58c68085d6d895e615b914eacecd0\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.089755850Z" level=error msg="ExecSync for \"6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518\" failed" error="failed to exec in container: failed to start exec \"725598f04ce453c14811283810bc8cff2fd380da48488b7c57bc4c9c774daa99\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.181976097Z" level=info msg="shim disconnected" id=6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518 namespace=k8s.io
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.182180387Z" level=warning msg="cleaning up after shim disconnected" id=6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518 namespace=k8s.io
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.182205100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.820998836Z" level=info msg="RemoveContainer for \"bc121411bdb60f8f38ab96f5245444d4bbbf5835d1516075d8e1e194925e7a45\""
	Aug 19 13:00:37 addons-789485 containerd[817]: time="2024-08-19T13:00:37.828685750Z" level=info msg="RemoveContainer for \"bc121411bdb60f8f38ab96f5245444d4bbbf5835d1516075d8e1e194925e7a45\" returns successfully"
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.739403183Z" level=info msg="RemoveContainer for \"fa7ca212005802cfe9462003bf1de2d6b5e04717df00115e0ac41a49f1bed2ed\""
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.746221961Z" level=info msg="RemoveContainer for \"fa7ca212005802cfe9462003bf1de2d6b5e04717df00115e0ac41a49f1bed2ed\" returns successfully"
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.748580433Z" level=info msg="StopPodSandbox for \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\""
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.757246409Z" level=info msg="TearDown network for sandbox \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\" successfully"
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.757292365Z" level=info msg="StopPodSandbox for \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\" returns successfully"
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.758057906Z" level=info msg="RemovePodSandbox for \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\""
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.758109122Z" level=info msg="Forcibly stopping sandbox \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\""
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.766278347Z" level=info msg="TearDown network for sandbox \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\" successfully"
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.773145452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 19 13:00:53 addons-789485 containerd[817]: time="2024-08-19T13:00:53.773419534Z" level=info msg="RemovePodSandbox \"44d1b55868deb4e27a328b847e427c065c3206682cf116acfec61dda62b6c1c3\" returns successfully"
	
	
	==> coredns [b7ac204163c82cf0230e42673f56cec15f16ff2d2aa1ee3fac6e5639a3d80f61] <==
	[INFO] 10.244.0.5:35251 - 44112 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000249696s
	[INFO] 10.244.0.5:40281 - 11201 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002319145s
	[INFO] 10.244.0.5:40281 - 17604 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00235711s
	[INFO] 10.244.0.5:34393 - 35469 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110153s
	[INFO] 10.244.0.5:34393 - 39560 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080459s
	[INFO] 10.244.0.5:39427 - 19244 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104664s
	[INFO] 10.244.0.5:39427 - 7979 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000320104s
	[INFO] 10.244.0.5:40608 - 63134 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007753s
	[INFO] 10.244.0.5:40608 - 60576 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052652s
	[INFO] 10.244.0.5:55605 - 21137 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033961s
	[INFO] 10.244.0.5:55605 - 42668 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111736s
	[INFO] 10.244.0.5:34551 - 15842 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001781032s
	[INFO] 10.244.0.5:34551 - 1504 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001811801s
	[INFO] 10.244.0.5:59570 - 54105 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080287s
	[INFO] 10.244.0.5:59570 - 53414 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042445s
	[INFO] 10.244.0.24:32934 - 54984 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001336104s
	[INFO] 10.244.0.24:34026 - 958 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001838829s
	[INFO] 10.244.0.24:49306 - 52770 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149603s
	[INFO] 10.244.0.24:39410 - 34142 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152721s
	[INFO] 10.244.0.24:39254 - 33048 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000175498s
	[INFO] 10.244.0.24:51611 - 65435 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001190506s
	[INFO] 10.244.0.24:58388 - 50477 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002685928s
	[INFO] 10.244.0.24:41517 - 22496 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002514024s
	[INFO] 10.244.0.24:33674 - 27973 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001861122s
	[INFO] 10.244.0.24:37980 - 9288 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006446629s
	
	
	==> describe nodes <==
	Name:               addons-789485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-789485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=addons-789485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_56_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-789485
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-789485"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:56:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-789485
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:03:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:59:57 +0000   Mon, 19 Aug 2024 12:56:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:59:57 +0000   Mon, 19 Aug 2024 12:56:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:59:57 +0000   Mon, 19 Aug 2024 12:56:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:59:57 +0000   Mon, 19 Aug 2024 12:56:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-789485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 472ace03691d467886851c6ea5a5a7c9
	  System UUID:                8ba49abf-c795-4c89-81be-9703d758fc2f
	  Boot ID:                    8c9f4b3e-6245-4429-b714-db63b5b637f4
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-hvxxd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-f7455                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-89d5ffd79-r6rmq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-7k67n    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m53s
	  kube-system                 coredns-6f6b679f8f-jdpwj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpathplugin-f28dh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-789485                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-vxhfm                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-789485                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-789485       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-ctgc7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-789485                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-8988944d9-7576p              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m55s
	  kube-system                 nvidia-device-plugin-daemonset-2l8sd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-6fb4cdfc84-gtgx5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-2kb7c                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-67nb6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-8ch45        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-9bxjr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  volcano-system              volcano-admission-77d7d48b68-wvwhd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-controllers-56675bb4d5-lw997        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-scheduler-576bc46687-76ltd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-g77xw              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m                     kube-proxy       
	  Normal   NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m14s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m14s (x4 over 6m14s)  kubelet          Node addons-789485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s (x3 over 6m14s)  kubelet          Node addons-789485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s (x3 over 6m14s)  kubelet          Node addons-789485 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m14s                  kubelet          Starting kubelet.
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-789485 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-789485 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-789485 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-789485 event: Registered Node addons-789485 in Controller
	
	
	==> dmesg <==
	[Aug19 11:09] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Aug19 12:28] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [9eaa83c2b75f46eedccb385aab2c498e5ee40a4b5439bc64fc82cff695598485] <==
	{"level":"info","ts":"2024-08-19T12:56:47.253064Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T12:56:47.253084Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T12:56:47.252967Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T12:56:47.254312Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T12:56:47.254285Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T12:56:47.537932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T12:56:47.538186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T12:56:47.538293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-19T12:56:47.538431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T12:56:47.538515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T12:56:47.538606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-19T12:56:47.538687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T12:56:47.540137Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-789485 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T12:56:47.540307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:56:47.540448Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:56:47.541177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T12:56:47.541895Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:56:47.548677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T12:56:47.542542Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T12:56:47.549073Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T12:56:47.542596Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:56:47.549324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:56:47.549455Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T12:56:47.543532Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T12:56:47.551125Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [f080d54401f334daa65ef7676b0aec8eec855fcd2b17825219390e17f7d54056] <==
	2024/08/19 12:59:40 GCP Auth Webhook started!
	2024/08/19 12:59:57 Ready to marshal response ...
	2024/08/19 12:59:57 Ready to write response ...
	2024/08/19 12:59:58 Ready to marshal response ...
	2024/08/19 12:59:58 Ready to write response ...
	
	
	==> kernel <==
	 13:03:00 up 1 day,  2:45,  0 users,  load average: 0.86, 1.53, 2.14
	Linux addons-789485 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [faa042b0b3d5ef8d18cee074cf2b941be071f74d51aea7d4e697d1ccb24ec4e6] <==
	I0819 13:01:42.112882       1 main.go:299] handling current node
	W0819 13:01:51.445185       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:01:51.445230       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 13:01:52.113040       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:01:52.113146       1 main.go:299] handling current node
	I0819 13:02:02.112904       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:02:02.113011       1 main.go:299] handling current node
	I0819 13:02:12.113053       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:02:12.113094       1 main.go:299] handling current node
	W0819 13:02:13.758043       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:02:13.758083       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:02:17.069773       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:02:17.069808       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:02:22.113139       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:02:22.113179       1 main.go:299] handling current node
	I0819 13:02:32.113613       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:02:32.113684       1 main.go:299] handling current node
	I0819 13:02:42.112732       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:02:42.112775       1 main.go:299] handling current node
	W0819 13:02:46.084506       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:02:46.084550       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 13:02:52.113056       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:02:52.113093       1 main.go:299] handling current node
	W0819 13:02:53.469879       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:02:53.469936       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kube-apiserver [1b13396dc26082f6e3a622105554d500d29c4655a2d7b2d894f5797d6722bbf5] <==
	W0819 12:58:12.643039       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:13.674804       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:14.468283       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.54.224:443: connect: connection refused
	E0819 12:58:14.468319       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.54.224:443: connect: connection refused" logger="UnhandledError"
	W0819 12:58:14.469888       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:14.514831       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.54.224:443: connect: connection refused
	E0819 12:58:14.514868       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.54.224:443: connect: connection refused" logger="UnhandledError"
	W0819 12:58:14.516960       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:14.715194       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:15.753799       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:16.770347       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:17.840966       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:18.894425       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:19.975065       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:21.023002       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:22.079263       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:23.149701       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.95.159:443: connect: connection refused
	W0819 12:58:34.501420       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.54.224:443: connect: connection refused
	E0819 12:58:34.501462       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.54.224:443: connect: connection refused" logger="UnhandledError"
	W0819 12:59:14.478994       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.54.224:443: connect: connection refused
	E0819 12:59:14.479050       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.54.224:443: connect: connection refused" logger="UnhandledError"
	W0819 12:59:14.526891       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.54.224:443: connect: connection refused
	E0819 12:59:14.526956       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.54.224:443: connect: connection refused" logger="UnhandledError"
	I0819 12:59:57.914726       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0819 12:59:57.953535       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [8c0a8c64565a1a4ca57bb568f5b5a134c7200e338f37a40d71eff38098066604] <==
	I0819 12:59:14.508187       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:14.508297       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:14.521864       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:14.539717       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:14.568677       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:14.568834       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:14.605741       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:15.551295       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:15.564056       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:16.668288       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:16.689824       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:17.673009       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:17.681833       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:17.689214       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 12:59:17.697614       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:17.708990       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:17.716022       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 12:59:40.661017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="12.092402ms"
	I0819 12:59:40.661768       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="90.888µs"
	I0819 12:59:47.028189       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 12:59:47.032638       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 12:59:47.076266       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 12:59:47.081406       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 12:59:57.214599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-789485"
	I0819 12:59:57.605041       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [928e1bed9bc12dd0563209bce58c43a5ddc75863aa8813c2afa45867ee386875] <==
	I0819 12:56:59.807982       1 server_linux.go:66] "Using iptables proxy"
	I0819 12:56:59.908900       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 12:56:59.908974       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 12:56:59.949119       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 12:56:59.949197       1 server_linux.go:169] "Using iptables Proxier"
	I0819 12:56:59.952105       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 12:56:59.952958       1 server.go:483] "Version info" version="v1.31.0"
	I0819 12:56:59.952979       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 12:56:59.957348       1 config.go:326] "Starting node config controller"
	I0819 12:56:59.957376       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 12:56:59.959244       1 config.go:197] "Starting service config controller"
	I0819 12:56:59.959259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 12:56:59.959310       1 config.go:104] "Starting endpoint slice config controller"
	I0819 12:56:59.959316       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 12:57:00.057650       1 shared_informer.go:320] Caches are synced for node config
	I0819 12:57:00.062346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 12:57:00.062428       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [f9a78240a0ab5d52b7f5252a9de394440455fe0c5cc48421951126050f0fffad] <==
	W0819 12:56:50.993352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 12:56:50.993434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:50.993529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:56:50.993606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:50.993709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:56:50.993980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:51.840810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:56:51.841256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:51.842356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 12:56:51.842579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:51.853888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 12:56:51.854173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:51.914043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 12:56:51.914275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:52.025140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 12:56:52.025191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:52.079586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 12:56:52.079837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:52.102941       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 12:56:52.103233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:52.139167       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 12:56:52.139447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 12:56:52.238072       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:56:52.238327       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 12:56:54.875864       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:00:56 addons-789485 kubelet[1492]: E0819 13:00:56.612337    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:01:08 addons-789485 kubelet[1492]: I0819 13:01:08.611480    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:01:08 addons-789485 kubelet[1492]: E0819 13:01:08.612242    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:01:13 addons-789485 kubelet[1492]: I0819 13:01:13.612761    1492 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-gtgx5" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 13:01:19 addons-789485 kubelet[1492]: I0819 13:01:19.611727    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:01:19 addons-789485 kubelet[1492]: E0819 13:01:19.612427    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:01:31 addons-789485 kubelet[1492]: I0819 13:01:31.612333    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:01:31 addons-789485 kubelet[1492]: E0819 13:01:31.612575    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:01:36 addons-789485 kubelet[1492]: I0819 13:01:36.611154    1492 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2l8sd" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 13:01:37 addons-789485 kubelet[1492]: I0819 13:01:37.613715    1492 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2kb7c" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 13:01:46 addons-789485 kubelet[1492]: I0819 13:01:46.611437    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:01:46 addons-789485 kubelet[1492]: E0819 13:01:46.611658    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:01:59 addons-789485 kubelet[1492]: I0819 13:01:59.612196    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:01:59 addons-789485 kubelet[1492]: E0819 13:01:59.612387    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:02:10 addons-789485 kubelet[1492]: I0819 13:02:10.611759    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:02:10 addons-789485 kubelet[1492]: E0819 13:02:10.612000    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:02:22 addons-789485 kubelet[1492]: I0819 13:02:22.612047    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:02:22 addons-789485 kubelet[1492]: E0819 13:02:22.612254    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:02:37 addons-789485 kubelet[1492]: I0819 13:02:37.612619    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:02:37 addons-789485 kubelet[1492]: E0819 13:02:37.613601    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	Aug 19 13:02:39 addons-789485 kubelet[1492]: I0819 13:02:39.612229    1492 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-gtgx5" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 13:02:45 addons-789485 kubelet[1492]: I0819 13:02:45.611333    1492 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2l8sd" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 13:02:47 addons-789485 kubelet[1492]: I0819 13:02:47.611638    1492 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2kb7c" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 13:02:50 addons-789485 kubelet[1492]: I0819 13:02:50.611331    1492 scope.go:117] "RemoveContainer" containerID="6a2e16014ca7d7123d55aed0b2136871dc5745c641c9793421e0f55897db0518"
	Aug 19 13:02:50 addons-789485 kubelet[1492]: E0819 13:02:50.611533    1492 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f7455_gadget(fca70167-defc-4dab-b45b-9e0e93156cfd)\"" pod="gadget/gadget-f7455" podUID="fca70167-defc-4dab-b45b-9e0e93156cfd"
	
	
	==> storage-provisioner [0dca0c7b099a9059af651769bf1434f01cf77db58bd452d5b4bfdd4cc0dacce8] <==
	I0819 12:57:04.583631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 12:57:04.608282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 12:57:04.608335       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 12:57:04.642362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 12:57:04.643362       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5773712-0fa6-4367-87ad-5433346e5573", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-789485_012e547a-4745-48b5-a9f4-fe8f1eeebcfe became leader
	I0819 12:57:04.643708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-789485_012e547a-4745-48b5-a9f4-fe8f1eeebcfe!
	I0819 12:57:04.745249       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-789485_012e547a-4745-48b5-a9f4-fe8f1eeebcfe!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-789485 -n addons-789485
helpers_test.go:261: (dbg) Run:  kubectl --context addons-789485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-p7xtw ingress-nginx-admission-patch-c5w78 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-789485 describe pod ingress-nginx-admission-create-p7xtw ingress-nginx-admission-patch-c5w78 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-789485 describe pod ingress-nginx-admission-create-p7xtw ingress-nginx-admission-patch-c5w78 test-job-nginx-0: exit status 1 (86.624001ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p7xtw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-c5w78" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-789485 describe pod ingress-nginx-admission-create-p7xtw ingress-nginx-admission-patch-c5w78 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.49s)

x
+
TestDockerEnvContainerd (51.35s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-358134 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-358134 --driver=docker  --container-runtime=containerd: (33.587051347s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-358134"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-358134": (1.057696393s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-M00aoca66a0E/agent.4165315" SSH_AGENT_PID="4165316" DOCKER_HOST=ssh://docker@127.0.0.1:38265 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-M00aoca66a0E/agent.4165315" SSH_AGENT_PID="4165316" DOCKER_HOST=ssh://docker@127.0.0.1:38265 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-M00aoca66a0E/agent.4165315" SSH_AGENT_PID="4165316" DOCKER_HOST=ssh://docker@127.0.0.1:38265 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (858.109893ms)

-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-M00aoca66a0E/agent.4165315" SSH_AGENT_PID="4165316" DOCKER_HOST=ssh://docker@127.0.0.1:38265 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-M00aoca66a0E/agent.4165315" SSH_AGENT_PID="4165316" DOCKER_HOST=ssh://docker@127.0.0.1:38265 docker image ls": (1.007677893s)
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
panic.go:626: *** TestDockerEnvContainerd FAILED at 2024-08-19 13:05:44.339611714 +0000 UTC m=+594.698563948
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect dockerenv-358134
helpers_test.go:235: (dbg) docker inspect dockerenv-358134:

-- stdout --
	[
	    {
	        "Id": "280642630cb86151def9076f1621b57f1a619f42777506bf93f278313d364525",
	        "Created": "2024-08-19T13:05:03.236788007Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4162962,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T13:05:03.35926799Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/280642630cb86151def9076f1621b57f1a619f42777506bf93f278313d364525/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/280642630cb86151def9076f1621b57f1a619f42777506bf93f278313d364525/hostname",
	        "HostsPath": "/var/lib/docker/containers/280642630cb86151def9076f1621b57f1a619f42777506bf93f278313d364525/hosts",
	        "LogPath": "/var/lib/docker/containers/280642630cb86151def9076f1621b57f1a619f42777506bf93f278313d364525/280642630cb86151def9076f1621b57f1a619f42777506bf93f278313d364525-json.log",
	        "Name": "/dockerenv-358134",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-358134:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "dockerenv-358134",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a30db6783c20bb1ea3382c15e90fabfdab3c8f58cef37677ea653f009ed08a39-init/diff:/var/lib/docker/overlay2/f9730c920ad297aa3b42f5a0ebbe1c9311721ca848f3268205322d3e26bf32e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a30db6783c20bb1ea3382c15e90fabfdab3c8f58cef37677ea653f009ed08a39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a30db6783c20bb1ea3382c15e90fabfdab3c8f58cef37677ea653f009ed08a39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a30db6783c20bb1ea3382c15e90fabfdab3c8f58cef37677ea653f009ed08a39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-358134",
	                "Source": "/var/lib/docker/volumes/dockerenv-358134/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-358134",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-358134",
	                "name.minikube.sigs.k8s.io": "dockerenv-358134",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a30aa025a27500c6c965cf596a944ea0321e54f3c49f8b4a06f186b899df8874",
	            "SandboxKey": "/var/run/docker/netns/a30aa025a275",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38265"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38266"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38269"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38267"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38268"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-358134": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c476dad9001ed4abff1d004e7ac59a407fea725728cfdc13d68e6c959f29ca8a",
	                    "EndpointID": "d8bac04f600c04558709cf86055a7bda1a4e51f76f1fa70b6e7d224ed5b6219a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-358134",
	                        "280642630cb8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p dockerenv-358134 -n dockerenv-358134
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p dockerenv-358134 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p dockerenv-358134 logs -n 25: (1.40105222s)
helpers_test.go:252: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	|------------|---------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	|  Command   |                                            Args                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|---------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| addons     | addons-789485 addons disable                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | gcp-auth --alsologtostderr                                                                  |                  |         |         |                     |                     |
	|            | -v=1                                                                                        |                  |         |         |                     |                     |
	| addons     | addons-789485 addons disable                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | yakd --alsologtostderr -v=1                                                                 |                  |         |         |                     |                     |
	| ip         | addons-789485 ip                                                                            | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	| addons     | addons-789485 addons disable                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | registry --alsologtostderr                                                                  |                  |         |         |                     |                     |
	|            | -v=1                                                                                        |                  |         |         |                     |                     |
	| addons     | disable nvidia-device-plugin                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | -p addons-789485                                                                            |                  |         |         |                     |                     |
	| ssh        | addons-789485 ssh cat                                                                       | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | /opt/local-path-provisioner/pvc-503ce065-8e4f-4367-8f25-861351c7bcf5_default_test-pvc/file1 |                  |         |         |                     |                     |
	| addons     | disable cloud-spanner -p                                                                    | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | addons-789485                                                                               |                  |         |         |                     |                     |
	| addons     | addons-789485 addons disable                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | storage-provisioner-rancher                                                                 |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                      |                  |         |         |                     |                     |
	| addons     | enable headlamp                                                                             | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:03 UTC |
	|            | -p addons-789485                                                                            |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                      |                  |         |         |                     |                     |
	| addons     | addons-789485 addons disable                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:03 UTC | 19 Aug 24 13:04 UTC |
	|            | headlamp --alsologtostderr                                                                  |                  |         |         |                     |                     |
	|            | -v=1                                                                                        |                  |         |         |                     |                     |
	| addons     | addons-789485 addons                                                                        | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | disable metrics-server                                                                      |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                      |                  |         |         |                     |                     |
	| addons     | disable inspektor-gadget -p                                                                 | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | addons-789485                                                                               |                  |         |         |                     |                     |
	| addons     | addons-789485 addons                                                                        | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | disable csi-hostpath-driver                                                                 |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                      |                  |         |         |                     |                     |
	| ssh        | addons-789485 ssh curl -s                                                                   | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | http://127.0.0.1/ -H 'Host:                                                                 |                  |         |         |                     |                     |
	|            | nginx.example.com'                                                                          |                  |         |         |                     |                     |
	| addons     | addons-789485 addons                                                                        | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | disable volumesnapshots                                                                     |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                      |                  |         |         |                     |                     |
	| ip         | addons-789485 ip                                                                            | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	| addons     | addons-789485 addons disable                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | ingress-dns --alsologtostderr                                                               |                  |         |         |                     |                     |
	|            | -v=1                                                                                        |                  |         |         |                     |                     |
	| addons     | addons-789485 addons disable                                                                | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | ingress --alsologtostderr -v=1                                                              |                  |         |         |                     |                     |
	| stop       | -p addons-789485                                                                            | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	| addons     | enable dashboard -p                                                                         | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | addons-789485                                                                               |                  |         |         |                     |                     |
	| addons     | disable dashboard -p                                                                        | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | addons-789485                                                                               |                  |         |         |                     |                     |
	| addons     | disable gvisor -p                                                                           | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	|            | addons-789485                                                                               |                  |         |         |                     |                     |
	| delete     | -p addons-789485                                                                            | addons-789485    | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:04 UTC |
	| start      | -p dockerenv-358134                                                                         | dockerenv-358134 | jenkins | v1.33.1 | 19 Aug 24 13:04 UTC | 19 Aug 24 13:05 UTC |
	|            | --driver=docker                                                                             |                  |         |         |                     |                     |
	|            | --container-runtime=containerd                                                              |                  |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p                                                                     | dockerenv-358134 | jenkins | v1.33.1 | 19 Aug 24 13:05 UTC | 19 Aug 24 13:05 UTC |
	|            | dockerenv-358134                                                                            |                  |         |         |                     |                     |
	|------------|---------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:04:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:04:57.337843 4162469 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:04:57.337971 4162469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:04:57.337976 4162469 out.go:358] Setting ErrFile to fd 2...
	I0819 13:04:57.337979 4162469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:04:57.338219 4162469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:04:57.338615 4162469 out.go:352] Setting JSON to false
	I0819 13:04:57.342875 4162469 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":96441,"bootTime":1723976256,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 13:04:57.342952 4162469 start.go:139] virtualization:  
	I0819 13:04:57.345306 4162469 out.go:177] * [dockerenv-358134] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 13:04:57.346628 4162469 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:04:57.346785 4162469 notify.go:220] Checking for updates...
	I0819 13:04:57.349371 4162469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:04:57.351107 4162469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:04:57.352592 4162469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 13:04:57.353767 4162469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 13:04:57.355294 4162469 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:04:57.356862 4162469 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:04:57.378172 4162469 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 13:04:57.378299 4162469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:04:57.457253 4162469 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-19 13:04:57.448318309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:04:57.457350 4162469 docker.go:307] overlay module found
	I0819 13:04:57.458907 4162469 out.go:177] * Using the docker driver based on user configuration
	I0819 13:04:57.460215 4162469 start.go:297] selected driver: docker
	I0819 13:04:57.460222 4162469 start.go:901] validating driver "docker" against <nil>
	I0819 13:04:57.460233 4162469 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:04:57.460348 4162469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:04:57.515933 4162469 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-19 13:04:57.507032867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:04:57.516104 4162469 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 13:04:57.516396 4162469 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 13:04:57.516537 4162469 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 13:04:57.517937 4162469 out.go:177] * Using Docker driver with root privileges
	I0819 13:04:57.519557 4162469 cni.go:84] Creating CNI manager for ""
	I0819 13:04:57.519570 4162469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:04:57.519578 4162469 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 13:04:57.519658 4162469 start.go:340] cluster config:
	{Name:dockerenv-358134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:dockerenv-358134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:04:57.521199 4162469 out.go:177] * Starting "dockerenv-358134" primary control-plane node in "dockerenv-358134" cluster
	I0819 13:04:57.522489 4162469 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 13:04:57.524113 4162469 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 13:04:57.525687 4162469 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 13:04:57.525730 4162469 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 13:04:57.525736 4162469 cache.go:56] Caching tarball of preloaded images
	I0819 13:04:57.525817 4162469 preload.go:172] Found /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 13:04:57.525825 4162469 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 13:04:57.526152 4162469 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/config.json ...
	I0819 13:04:57.526171 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/config.json: {Name:mkb76bb6765feda21f837565bf0d157270b9df45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:04:57.526356 4162469 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	W0819 13:04:57.546248 4162469 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 13:04:57.546260 4162469 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 13:04:57.546337 4162469 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 13:04:57.546353 4162469 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 13:04:57.546357 4162469 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 13:04:57.546365 4162469 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 13:04:57.546370 4162469 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 13:04:57.673945 4162469 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 13:04:57.673989 4162469 cache.go:194] Successfully downloaded all kic artifacts
	I0819 13:04:57.674031 4162469 start.go:360] acquireMachinesLock for dockerenv-358134: {Name:mk9600d50e27a11b60571aeeb95b243711393bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:04:57.674157 4162469 start.go:364] duration metric: took 108.562µs to acquireMachinesLock for "dockerenv-358134"
	I0819 13:04:57.674182 4162469 start.go:93] Provisioning new machine with config: &{Name:dockerenv-358134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:dockerenv-358134 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 13:04:57.674270 4162469 start.go:125] createHost starting for "" (driver="docker")
	I0819 13:04:57.676145 4162469 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0819 13:04:57.676391 4162469 start.go:159] libmachine.API.Create for "dockerenv-358134" (driver="docker")
	I0819 13:04:57.676419 4162469 client.go:168] LocalClient.Create starting
	I0819 13:04:57.676484 4162469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem
	I0819 13:04:57.676514 4162469 main.go:141] libmachine: Decoding PEM data...
	I0819 13:04:57.676526 4162469 main.go:141] libmachine: Parsing certificate...
	I0819 13:04:57.676578 4162469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem
	I0819 13:04:57.676592 4162469 main.go:141] libmachine: Decoding PEM data...
	I0819 13:04:57.676601 4162469 main.go:141] libmachine: Parsing certificate...
	I0819 13:04:57.677000 4162469 cli_runner.go:164] Run: docker network inspect dockerenv-358134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 13:04:57.692916 4162469 cli_runner.go:211] docker network inspect dockerenv-358134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 13:04:57.692997 4162469 network_create.go:284] running [docker network inspect dockerenv-358134] to gather additional debugging logs...
	I0819 13:04:57.693013 4162469 cli_runner.go:164] Run: docker network inspect dockerenv-358134
	W0819 13:04:57.707052 4162469 cli_runner.go:211] docker network inspect dockerenv-358134 returned with exit code 1
	I0819 13:04:57.707073 4162469 network_create.go:287] error running [docker network inspect dockerenv-358134]: docker network inspect dockerenv-358134: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-358134 not found
	I0819 13:04:57.707085 4162469 network_create.go:289] output of [docker network inspect dockerenv-358134]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-358134 not found
	
	** /stderr **
	I0819 13:04:57.707249 4162469 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 13:04:57.724181 4162469 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018163f0}
	I0819 13:04:57.724219 4162469 network_create.go:124] attempt to create docker network dockerenv-358134 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 13:04:57.724279 4162469 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-358134 dockerenv-358134
	I0819 13:04:57.792367 4162469 network_create.go:108] docker network dockerenv-358134 192.168.49.0/24 created
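	
	For reference, the dedicated bridge network created above can be reproduced and checked by hand; this is a minimal sketch reusing the exact flags from the cli_runner call in the log:
	
	  # create the bridge network the kic driver expects (same flags as logged above)
	  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-358134 \
	    dockerenv-358134
	  # confirm the subnet/gateway from which the static node IP 192.168.49.2 is calculated
	  docker network inspect dockerenv-358134 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	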
	I0819 13:04:57.792390 4162469 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-358134" container
	I0819 13:04:57.792473 4162469 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 13:04:57.806066 4162469 cli_runner.go:164] Run: docker volume create dockerenv-358134 --label name.minikube.sigs.k8s.io=dockerenv-358134 --label created_by.minikube.sigs.k8s.io=true
	I0819 13:04:57.822389 4162469 oci.go:103] Successfully created a docker volume dockerenv-358134
	I0819 13:04:57.822481 4162469 cli_runner.go:164] Run: docker run --rm --name dockerenv-358134-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-358134 --entrypoint /usr/bin/test -v dockerenv-358134:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 13:04:58.505648 4162469 oci.go:107] Successfully prepared a docker volume dockerenv-358134
	I0819 13:04:58.505696 4162469 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 13:04:58.505715 4162469 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 13:04:58.505794 4162469 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-358134:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 13:05:03.166843 4162469 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-358134:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.661014164s)
	I0819 13:05:03.166874 4162469 kic.go:203] duration metric: took 4.661156416s to extract preloaded images to volume ...
	W0819 13:05:03.167213 4162469 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 13:05:03.167315 4162469 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 13:05:03.220637 4162469 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-358134 --name dockerenv-358134 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-358134 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-358134 --network dockerenv-358134 --ip 192.168.49.2 --volume dockerenv-358134:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 13:05:03.524606 4162469 cli_runner.go:164] Run: docker container inspect dockerenv-358134 --format={{.State.Running}}
	I0819 13:05:03.544198 4162469 cli_runner.go:164] Run: docker container inspect dockerenv-358134 --format={{.State.Status}}
	I0819 13:05:03.567198 4162469 cli_runner.go:164] Run: docker exec dockerenv-358134 stat /var/lib/dpkg/alternatives/iptables
	I0819 13:05:03.624552 4162469 oci.go:144] the created container "dockerenv-358134" has a running status.
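	
	The docker run above publishes 127.0.0.1::22 (and the other exposed ports) on random loopback host ports, which is why the SSH client further down dials 127.0.0.1:38265 rather than port 22. The mapping can be checked by hand with either of the following (container name taken from this run):
	
	  docker port dockerenv-358134 22/tcp
	  # equivalent to the inspect call minikube itself runs
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' dockerenv-358134
	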
	I0819 13:05:03.624572 4162469 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa...
	I0819 13:05:03.909062 4162469 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 13:05:03.935022 4162469 cli_runner.go:164] Run: docker container inspect dockerenv-358134 --format={{.State.Status}}
	I0819 13:05:03.959002 4162469 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 13:05:03.959013 4162469 kic_runner.go:114] Args: [docker exec --privileged dockerenv-358134 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 13:05:04.073683 4162469 cli_runner.go:164] Run: docker container inspect dockerenv-358134 --format={{.State.Status}}
	I0819 13:05:04.097604 4162469 machine.go:93] provisionDockerMachine start ...
	I0819 13:05:04.097694 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:04.125367 4162469 main.go:141] libmachine: Using SSH client type: native
	I0819 13:05:04.125655 4162469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38265 <nil> <nil>}
	I0819 13:05:04.125662 4162469 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:05:04.126281 4162469 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52676->127.0.0.1:38265: read: connection reset by peer
	I0819 13:05:07.259571 4162469 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-358134
	
	I0819 13:05:07.259592 4162469 ubuntu.go:169] provisioning hostname "dockerenv-358134"
	I0819 13:05:07.259664 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:07.277766 4162469 main.go:141] libmachine: Using SSH client type: native
	I0819 13:05:07.277993 4162469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38265 <nil> <nil>}
	I0819 13:05:07.278001 4162469 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-358134 && echo "dockerenv-358134" | sudo tee /etc/hostname
	I0819 13:05:07.420043 4162469 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-358134
	
	I0819 13:05:07.420140 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:07.437443 4162469 main.go:141] libmachine: Using SSH client type: native
	I0819 13:05:07.437697 4162469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38265 <nil> <nil>}
	I0819 13:05:07.437711 4162469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-358134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-358134/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-358134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:05:07.576022 4162469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:05:07.576039 4162469 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19479-4141166/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-4141166/.minikube}
	I0819 13:05:07.576056 4162469 ubuntu.go:177] setting up certificates
	I0819 13:05:07.576072 4162469 provision.go:84] configureAuth start
	I0819 13:05:07.576136 4162469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-358134
	I0819 13:05:07.593300 4162469 provision.go:143] copyHostCerts
	I0819 13:05:07.593360 4162469 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem, removing ...
	I0819 13:05:07.593367 4162469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem
	I0819 13:05:07.593442 4162469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem (1082 bytes)
	I0819 13:05:07.593540 4162469 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem, removing ...
	I0819 13:05:07.593544 4162469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem
	I0819 13:05:07.593568 4162469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem (1123 bytes)
	I0819 13:05:07.593626 4162469 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem, removing ...
	I0819 13:05:07.593629 4162469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem
	I0819 13:05:07.593651 4162469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem (1675 bytes)
	I0819 13:05:07.593702 4162469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem org=jenkins.dockerenv-358134 san=[127.0.0.1 192.168.49.2 dockerenv-358134 localhost minikube]
	I0819 13:05:08.246289 4162469 provision.go:177] copyRemoteCerts
	I0819 13:05:08.246350 4162469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:05:08.246391 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:08.263429 4162469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38265 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa Username:docker}
	I0819 13:05:08.356760 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 13:05:08.381827 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 13:05:08.407486 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 13:05:08.431614 4162469 provision.go:87] duration metric: took 855.529807ms to configureAuth
	I0819 13:05:08.431632 4162469 ubuntu.go:193] setting minikube options for container-runtime
	I0819 13:05:08.431904 4162469 config.go:182] Loaded profile config "dockerenv-358134": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:05:08.431910 4162469 machine.go:96] duration metric: took 4.334296053s to provisionDockerMachine
	I0819 13:05:08.431916 4162469 client.go:171] duration metric: took 10.755492501s to LocalClient.Create
	I0819 13:05:08.431941 4162469 start.go:167] duration metric: took 10.755550371s to libmachine.API.Create "dockerenv-358134"
	I0819 13:05:08.431948 4162469 start.go:293] postStartSetup for "dockerenv-358134" (driver="docker")
	I0819 13:05:08.431956 4162469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:05:08.432009 4162469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:05:08.432054 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:08.448882 4162469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38265 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa Username:docker}
	I0819 13:05:08.545096 4162469 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:05:08.548330 4162469 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 13:05:08.548354 4162469 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 13:05:08.548362 4162469 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 13:05:08.548368 4162469 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 13:05:08.548378 4162469 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/addons for local assets ...
	I0819 13:05:08.548440 4162469 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/files for local assets ...
	I0819 13:05:08.548940 4162469 start.go:296] duration metric: took 116.979022ms for postStartSetup
	I0819 13:05:08.549274 4162469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-358134
	I0819 13:05:08.565712 4162469 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/config.json ...
	I0819 13:05:08.565988 4162469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:05:08.566040 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:08.582874 4162469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38265 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa Username:docker}
	I0819 13:05:08.672901 4162469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 13:05:08.677587 4162469 start.go:128] duration metric: took 11.003300931s to createHost
	I0819 13:05:08.677603 4162469 start.go:83] releasing machines lock for "dockerenv-358134", held for 11.00343849s
	I0819 13:05:08.677681 4162469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-358134
	I0819 13:05:08.693630 4162469 ssh_runner.go:195] Run: cat /version.json
	I0819 13:05:08.693643 4162469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:05:08.693675 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:08.693729 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:08.711565 4162469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38265 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa Username:docker}
	I0819 13:05:08.725050 4162469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38265 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa Username:docker}
	I0819 13:05:08.930150 4162469 ssh_runner.go:195] Run: systemctl --version
	I0819 13:05:08.934654 4162469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 13:05:08.939086 4162469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 13:05:08.966502 4162469 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 13:05:08.966573 4162469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:05:08.995548 4162469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
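	
	Because the kic base image ships several CNI configs, minikube patches the loopback config in place and shelves the bridge/podman configs by renaming them to *.mk_disabled, so that only the CNI it installs later (kindnet, per the recommendation further down) is active. A quick way to inspect the result inside the node, assuming the container name from this run:
	
	  # live configs keep their original names; disabled ones carry the .mk_disabled suffix
	  docker exec dockerenv-358134 ls -la /etc/cni/net.d
	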
	I0819 13:05:08.995571 4162469 start.go:495] detecting cgroup driver to use...
	I0819 13:05:08.995607 4162469 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 13:05:08.995668 4162469 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 13:05:09.013018 4162469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 13:05:09.026438 4162469 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:05:09.026505 4162469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:05:09.042187 4162469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:05:09.058393 4162469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:05:09.156222 4162469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:05:09.249741 4162469 docker.go:233] disabling docker service ...
	I0819 13:05:09.249801 4162469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:05:09.271741 4162469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:05:09.284552 4162469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:05:09.389668 4162469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:05:09.477175 4162469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:05:09.488552 4162469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:05:09.505692 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 13:05:09.516254 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 13:05:09.526813 4162469 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 13:05:09.526876 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 13:05:09.537416 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 13:05:09.548136 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 13:05:09.558967 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 13:05:09.569252 4162469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:05:09.579175 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 13:05:09.590786 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 13:05:09.601365 4162469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 13:05:09.611761 4162469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:05:09.620761 4162469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:05:09.629277 4162469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:05:09.713269 4162469 ssh_runner.go:195] Run: sudo systemctl restart containerd
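	
	The sed calls above rewrite /etc/containerd/config.toml before this restart: they pin the sandbox (pause) image to registry.k8s.io/pause:3.10, force SystemdCgroup = false to match the detected "cgroupfs" driver, switch legacy runc runtime entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A sanity check of the resulting file, sketched with the container name from this run:
	
	  docker exec dockerenv-358134 grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	  # expected, given the edits logged above:
	  #   sandbox_image = "registry.k8s.io/pause:3.10"
	  #   SystemdCgroup = false
	  #   conf_dir = "/etc/cni/net.d"
	  docker exec dockerenv-358134 systemctl is-active containerd   # should report "active" after the restart
	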
	I0819 13:05:09.853378 4162469 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 13:05:09.853441 4162469 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 13:05:09.857256 4162469 start.go:563] Will wait 60s for crictl version
	I0819 13:05:09.857312 4162469 ssh_runner.go:195] Run: which crictl
	I0819 13:05:09.860835 4162469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:05:09.899371 4162469 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 13:05:09.899437 4162469 ssh_runner.go:195] Run: containerd --version
	I0819 13:05:09.925498 4162469 ssh_runner.go:195] Run: containerd --version
	I0819 13:05:09.950751 4162469 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 13:05:09.952112 4162469 cli_runner.go:164] Run: docker network inspect dockerenv-358134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 13:05:09.967833 4162469 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 13:05:09.971402 4162469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:05:09.982407 4162469 kubeadm.go:883] updating cluster {Name:dockerenv-358134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:dockerenv-358134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:05:09.982522 4162469 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 13:05:09.982582 4162469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:05:10.038432 4162469 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 13:05:10.038445 4162469 containerd.go:534] Images already preloaded, skipping extraction
	I0819 13:05:10.038511 4162469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:05:10.077675 4162469 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 13:05:10.077689 4162469 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:05:10.077698 4162469 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0819 13:05:10.077816 4162469 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-358134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:dockerenv-358134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
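	
	The kubelet unit drop-in shown above is what systemd will actually execute on the node; a few lines further down it is written via scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, alongside the generated /lib/systemd/system/kubelet.service. To see what landed on the node (a sketch using this run's container name):
	
	  docker exec dockerenv-358134 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  docker exec dockerenv-358134 systemctl cat kubelet   # unit plus drop-ins, as systemd resolves them
	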
	I0819 13:05:10.077901 4162469 ssh_runner.go:195] Run: sudo crictl info
	I0819 13:05:10.119021 4162469 cni.go:84] Creating CNI manager for ""
	I0819 13:05:10.119031 4162469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:05:10.119039 4162469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:05:10.119060 4162469 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-358134 NodeName:dockerenv-358134 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:05:10.119200 4162469 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-358134"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
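	
	The generated kubeadm config above is not applied immediately: it is first copied to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2170-byte scp below), promoted to kubeadm.yaml, and then consumed by kubeadm init, all of which appears verbatim later in this log. Condensed to the two essential commands:
	
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	  # (the real invocation below additionally passes the long --ignore-preflight-errors list)
	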
	I0819 13:05:10.119269 4162469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:05:10.130504 4162469 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:05:10.130571 4162469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:05:10.140095 4162469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0819 13:05:10.159482 4162469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:05:10.179246 4162469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0819 13:05:10.199437 4162469 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 13:05:10.203189 4162469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:05:10.214532 4162469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:05:10.297789 4162469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:05:10.314549 4162469 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134 for IP: 192.168.49.2
	I0819 13:05:10.314568 4162469 certs.go:194] generating shared ca certs ...
	I0819 13:05:10.314586 4162469 certs.go:226] acquiring lock for ca certs: {Name:mkb3362db9c120e28de14409a94f066387768cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:10.314733 4162469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key
	I0819 13:05:10.314772 4162469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key
	I0819 13:05:10.314778 4162469 certs.go:256] generating profile certs ...
	I0819 13:05:10.314840 4162469 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/client.key
	I0819 13:05:10.314850 4162469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/client.crt with IP's: []
	I0819 13:05:11.250553 4162469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/client.crt ...
	I0819 13:05:11.250569 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/client.crt: {Name:mk16c0f63b6a6383e083cc27d7ffc74193bec14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:11.250776 4162469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/client.key ...
	I0819 13:05:11.250783 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/client.key: {Name:mkbd0afc55ae22ad00327ce903053e4f620d1b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:11.251301 4162469 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.key.ef28e64b
	I0819 13:05:11.251315 4162469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.crt.ef28e64b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 13:05:11.841491 4162469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.crt.ef28e64b ...
	I0819 13:05:11.841508 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.crt.ef28e64b: {Name:mk0802c661bdc82428b99614cec952530c5bf4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:11.841710 4162469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.key.ef28e64b ...
	I0819 13:05:11.841719 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.key.ef28e64b: {Name:mka01ce7700e52d7fefe708ec8247984715aabf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:11.842320 4162469 certs.go:381] copying /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.crt.ef28e64b -> /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.crt
	I0819 13:05:11.842415 4162469 certs.go:385] copying /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.key.ef28e64b -> /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.key
	I0819 13:05:11.842494 4162469 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.key
	I0819 13:05:11.842510 4162469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.crt with IP's: []
	I0819 13:05:12.385926 4162469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.crt ...
	I0819 13:05:12.385947 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.crt: {Name:mka363fafaeeb76a7ea9ffa32a5bae4433e98831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:12.386158 4162469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.key ...
	I0819 13:05:12.386167 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.key: {Name:mka46fc22b43ae95517d621c46fde432c81cba5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:12.386755 4162469 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:05:12.386799 4162469 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem (1082 bytes)
	I0819 13:05:12.386828 4162469 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:05:12.386852 4162469 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem (1675 bytes)
	I0819 13:05:12.387456 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:05:12.415708 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:05:12.441914 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:05:12.468760 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 13:05:12.494713 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:05:12.523685 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 13:05:12.551663 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:05:12.577531 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/dockerenv-358134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:05:12.606203 4162469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:05:12.634464 4162469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:05:12.654155 4162469 ssh_runner.go:195] Run: openssl version
	I0819 13:05:12.664052 4162469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:05:12.675319 4162469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:05:12.679047 4162469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:05:12.679105 4162469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:05:12.687267 4162469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:05:12.697510 4162469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:05:12.701020 4162469 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 13:05:12.701060 4162469 kubeadm.go:392] StartCluster: {Name:dockerenv-358134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:dockerenv-358134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:05:12.701129 4162469 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 13:05:12.701200 4162469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:05:12.738995 4162469 cri.go:89] found id: ""
	I0819 13:05:12.739061 4162469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:05:12.748129 4162469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 13:05:12.757214 4162469 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 13:05:12.757272 4162469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 13:05:12.766492 4162469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 13:05:12.766502 4162469 kubeadm.go:157] found existing configuration files:
	
	I0819 13:05:12.766554 4162469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 13:05:12.776075 4162469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 13:05:12.776129 4162469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 13:05:12.785104 4162469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 13:05:12.794305 4162469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 13:05:12.794362 4162469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 13:05:12.803091 4162469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 13:05:12.812439 4162469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 13:05:12.812497 4162469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 13:05:12.821418 4162469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 13:05:12.830577 4162469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 13:05:12.830645 4162469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 13:05:12.839582 4162469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 13:05:12.883833 4162469 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 13:05:12.884148 4162469 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 13:05:12.904405 4162469 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 13:05:12.904467 4162469 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 13:05:12.904500 4162469 kubeadm.go:310] OS: Linux
	I0819 13:05:12.904543 4162469 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 13:05:12.904589 4162469 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 13:05:12.904633 4162469 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 13:05:12.904678 4162469 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 13:05:12.904724 4162469 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 13:05:12.904769 4162469 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 13:05:12.904812 4162469 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 13:05:12.904857 4162469 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 13:05:12.904900 4162469 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 13:05:12.971002 4162469 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 13:05:12.971102 4162469 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 13:05:12.971189 4162469 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 13:05:12.980220 4162469 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 13:05:12.982670 4162469 out.go:235]   - Generating certificates and keys ...
	I0819 13:05:12.982785 4162469 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 13:05:12.982865 4162469 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 13:05:13.916074 4162469 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 13:05:14.185657 4162469 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 13:05:14.730155 4162469 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 13:05:15.098662 4162469 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 13:05:15.338403 4162469 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 13:05:15.338733 4162469 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [dockerenv-358134 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 13:05:15.764083 4162469 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 13:05:15.764442 4162469 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-358134 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 13:05:16.147944 4162469 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 13:05:16.793931 4162469 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 13:05:16.984382 4162469 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 13:05:16.984620 4162469 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 13:05:17.425427 4162469 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 13:05:18.203625 4162469 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 13:05:19.006220 4162469 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 13:05:19.496274 4162469 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 13:05:19.830955 4162469 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 13:05:19.831766 4162469 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 13:05:19.834879 4162469 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 13:05:19.836836 4162469 out.go:235]   - Booting up control plane ...
	I0819 13:05:19.836931 4162469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 13:05:19.837005 4162469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 13:05:19.837822 4162469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 13:05:19.849947 4162469 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 13:05:19.857296 4162469 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 13:05:19.857347 4162469 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 13:05:19.972210 4162469 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 13:05:19.972318 4162469 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 13:05:21.969265 4162469 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001539524s
	I0819 13:05:21.969343 4162469 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 13:05:27.471063 4162469 kubeadm.go:310] [api-check] The API server is healthy after 5.50208623s
	I0819 13:05:27.490495 4162469 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 13:05:27.504747 4162469 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 13:05:27.526133 4162469 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 13:05:27.526454 4162469 kubeadm.go:310] [mark-control-plane] Marking the node dockerenv-358134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 13:05:27.538847 4162469 kubeadm.go:310] [bootstrap-token] Using token: a28i1o.6my6vmlvyb9v6dyl
	I0819 13:05:27.540816 4162469 out.go:235]   - Configuring RBAC rules ...
	I0819 13:05:27.540944 4162469 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 13:05:27.547675 4162469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 13:05:27.559605 4162469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 13:05:27.564631 4162469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 13:05:27.570125 4162469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 13:05:27.579249 4162469 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 13:05:27.878073 4162469 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 13:05:28.305169 4162469 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 13:05:28.880783 4162469 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 13:05:28.881924 4162469 kubeadm.go:310] 
	I0819 13:05:28.881988 4162469 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 13:05:28.881993 4162469 kubeadm.go:310] 
	I0819 13:05:28.882067 4162469 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 13:05:28.882070 4162469 kubeadm.go:310] 
	I0819 13:05:28.882094 4162469 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 13:05:28.882149 4162469 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 13:05:28.882197 4162469 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 13:05:28.882200 4162469 kubeadm.go:310] 
	I0819 13:05:28.882251 4162469 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 13:05:28.882255 4162469 kubeadm.go:310] 
	I0819 13:05:28.882310 4162469 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 13:05:28.882314 4162469 kubeadm.go:310] 
	I0819 13:05:28.882364 4162469 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 13:05:28.882434 4162469 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 13:05:28.882499 4162469 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 13:05:28.882503 4162469 kubeadm.go:310] 
	I0819 13:05:28.882583 4162469 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 13:05:28.882656 4162469 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 13:05:28.882659 4162469 kubeadm.go:310] 
	I0819 13:05:28.882746 4162469 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a28i1o.6my6vmlvyb9v6dyl \
	I0819 13:05:28.882845 4162469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:526be1a16141ea4231f47bdfd207f2f21320af5d9aae23337e8717d344429352 \
	I0819 13:05:28.882864 4162469 kubeadm.go:310] 	--control-plane 
	I0819 13:05:28.882867 4162469 kubeadm.go:310] 
	I0819 13:05:28.882947 4162469 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 13:05:28.882950 4162469 kubeadm.go:310] 
	I0819 13:05:28.883029 4162469 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a28i1o.6my6vmlvyb9v6dyl \
	I0819 13:05:28.883126 4162469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:526be1a16141ea4231f47bdfd207f2f21320af5d9aae23337e8717d344429352 
	I0819 13:05:28.888045 4162469 kubeadm.go:310] W0819 13:05:12.880193    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:05:28.888351 4162469 kubeadm.go:310] W0819 13:05:12.881255    1017 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 13:05:28.888571 4162469 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 13:05:28.888678 4162469 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 13:05:28.888712 4162469 cni.go:84] Creating CNI manager for ""
	I0819 13:05:28.888719 4162469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:05:28.893158 4162469 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 13:05:28.895760 4162469 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 13:05:28.899903 4162469 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 13:05:28.899917 4162469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 13:05:28.923323 4162469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 13:05:29.221555 4162469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 13:05:29.221702 4162469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 13:05:29.221808 4162469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-358134 minikube.k8s.io/updated_at=2024_08_19T13_05_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=dockerenv-358134 minikube.k8s.io/primary=true
	I0819 13:05:29.372971 4162469 ops.go:34] apiserver oom_adj: -16
	I0819 13:05:29.372989 4162469 kubeadm.go:1113] duration metric: took 151.345617ms to wait for elevateKubeSystemPrivileges
	I0819 13:05:29.373000 4162469 kubeadm.go:394] duration metric: took 16.671947388s to StartCluster
	I0819 13:05:29.373017 4162469 settings.go:142] acquiring lock: {Name:mkaa4019b166703efd95aaa3737397f414197f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:29.373077 4162469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:05:29.373763 4162469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/kubeconfig: {Name:mk7b0eea2060f71726f692d0256a33fdf7565e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:05:29.373995 4162469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 13:05:29.374073 4162469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 13:05:29.374330 4162469 config.go:182] Loaded profile config "dockerenv-358134": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:05:29.374363 4162469 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:05:29.374421 4162469 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-358134"
	I0819 13:05:29.374443 4162469 addons.go:234] Setting addon storage-provisioner=true in "dockerenv-358134"
	I0819 13:05:29.374466 4162469 host.go:66] Checking if "dockerenv-358134" exists ...
	I0819 13:05:29.374943 4162469 cli_runner.go:164] Run: docker container inspect dockerenv-358134 --format={{.State.Status}}
	I0819 13:05:29.375117 4162469 addons.go:69] Setting default-storageclass=true in profile "dockerenv-358134"
	I0819 13:05:29.375136 4162469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-358134"
	I0819 13:05:29.375367 4162469 cli_runner.go:164] Run: docker container inspect dockerenv-358134 --format={{.State.Status}}
	I0819 13:05:29.376896 4162469 out.go:177] * Verifying Kubernetes components...
	I0819 13:05:29.380439 4162469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:05:29.417038 4162469 addons.go:234] Setting addon default-storageclass=true in "dockerenv-358134"
	I0819 13:05:29.417067 4162469 host.go:66] Checking if "dockerenv-358134" exists ...
	I0819 13:05:29.417512 4162469 cli_runner.go:164] Run: docker container inspect dockerenv-358134 --format={{.State.Status}}
	I0819 13:05:29.432893 4162469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:05:29.435952 4162469 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:05:29.435964 4162469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:05:29.436033 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:29.447949 4162469 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:05:29.447970 4162469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:05:29.448038 4162469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-358134
	I0819 13:05:29.475409 4162469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38265 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa Username:docker}
	I0819 13:05:29.491886 4162469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38265 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/dockerenv-358134/id_rsa Username:docker}
	I0819 13:05:29.656016 4162469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:05:29.689978 4162469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 13:05:29.690081 4162469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:05:29.705928 4162469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:05:30.284337 4162469 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 13:05:30.286301 4162469 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:05:30.286371 4162469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:05:30.323958 4162469 api_server.go:72] duration metric: took 949.935231ms to wait for apiserver process to appear ...
	I0819 13:05:30.323973 4162469 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:05:30.324009 4162469 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 13:05:30.329216 4162469 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 13:05:30.331888 4162469 addons.go:510] duration metric: took 957.512181ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 13:05:30.334958 4162469 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 13:05:30.336589 4162469 api_server.go:141] control plane version: v1.31.0
	I0819 13:05:30.336608 4162469 api_server.go:131] duration metric: took 12.613247ms to wait for apiserver health ...
	I0819 13:05:30.336615 4162469 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:05:30.343644 4162469 system_pods.go:59] 5 kube-system pods found
	I0819 13:05:30.343667 4162469 system_pods.go:61] "etcd-dockerenv-358134" [524ad50a-1e3b-415d-8d5e-d33355b900a7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 13:05:30.343674 4162469 system_pods.go:61] "kube-apiserver-dockerenv-358134" [218e3b63-0b32-40f3-8dcf-1480f303d964] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 13:05:30.343684 4162469 system_pods.go:61] "kube-controller-manager-dockerenv-358134" [8bda3b22-cfcb-4125-8a63-70587db21e2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 13:05:30.343690 4162469 system_pods.go:61] "kube-scheduler-dockerenv-358134" [31e7a6ae-4c50-4d30-988c-31952815b86c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 13:05:30.343694 4162469 system_pods.go:61] "storage-provisioner" [fe9f4f55-bc94-4e7e-904c-bfe9e08c0d0a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0819 13:05:30.343700 4162469 system_pods.go:74] duration metric: took 7.079017ms to wait for pod list to return data ...
	I0819 13:05:30.343711 4162469 kubeadm.go:582] duration metric: took 969.694973ms to wait for: map[apiserver:true system_pods:true]
	I0819 13:05:30.343723 4162469 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:05:30.347493 4162469 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 13:05:30.347513 4162469 node_conditions.go:123] node cpu capacity is 2
	I0819 13:05:30.347523 4162469 node_conditions.go:105] duration metric: took 3.796319ms to run NodePressure ...
	I0819 13:05:30.347534 4162469 start.go:241] waiting for startup goroutines ...
	I0819 13:05:30.788897 4162469 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-358134" context rescaled to 1 replicas
	I0819 13:05:30.788922 4162469 start.go:246] waiting for cluster config update ...
	I0819 13:05:30.788932 4162469 start.go:255] writing updated cluster config ...
	I0819 13:05:30.789236 4162469 ssh_runner.go:195] Run: rm -f paused
	I0819 13:05:30.859656 4162469 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:05:30.862895 4162469 out.go:177] * Done! kubectl is now configured to use "dockerenv-358134" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5449e5ffe1078       6a23fa8fd2b78       10 seconds ago      Running             kindnet-cni               0                   4b0d420ecc120       kindnet-wjc7z
	003d05ca351bc       71d55d66fd4ee       11 seconds ago      Running             kube-proxy                0                   b058a0ebda7d2       kube-proxy-bqmbg
	136d326d81414       ba04bb24b9575       12 seconds ago      Running             storage-provisioner       0                   76a58585febad       storage-provisioner
	b7e1027fc5808       fbbbd428abb4d       23 seconds ago      Running             kube-scheduler            0                   6ac864a2a2568       kube-scheduler-dockerenv-358134
	a1e5a579bcc8a       fcb0683e6bdbd       23 seconds ago      Running             kube-controller-manager   0                   969bbd5864780       kube-controller-manager-dockerenv-358134
	2cc7d1b9a773f       cd0f0ae0ec9e0       23 seconds ago      Running             kube-apiserver            0                   8af5bbfd3bd97       kube-apiserver-dockerenv-358134
	f0a7c54886892       27e3830e14027       23 seconds ago      Running             etcd                      0                   fc06d4c0b8b39       etcd-dockerenv-358134
	
	
	==> containerd <==
	Aug 19 13:05:33 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:33.921786840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 13:05:33 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:33.921848862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 13:05:33 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:33.921994797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 13:05:33 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:33.963897127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqmbg,Uid:edf7e16f-fe1c-49fe-a6b2-5c52368f453b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b058a0ebda7d206ae2521626bf2e75085eb66ec8c92be378cc95aefbc4302c49\""
	Aug 19 13:05:33 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:33.975396944Z" level=info msg="CreateContainer within sandbox \"b058a0ebda7d206ae2521626bf2e75085eb66ec8c92be378cc95aefbc4302c49\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Aug 19 13:05:33 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:33.995031205Z" level=info msg="CreateContainer within sandbox \"b058a0ebda7d206ae2521626bf2e75085eb66ec8c92be378cc95aefbc4302c49\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"003d05ca351bc7bb412d67092773d0b5c29f9eb243c18130e99f3fc57fda131c\""
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.003017737Z" level=info msg="StartContainer for \"003d05ca351bc7bb412d67092773d0b5c29f9eb243c18130e99f3fc57fda131c\""
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.028181386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-wjc7z,Uid:44490b70-2cbb-45bb-87fb-895a4c1bbe5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b0d420ecc120caccb475bb7e24f48e9fb83f07f4c930fb5de6f9dedce55dcc7\""
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.030910274Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20240813-c6f155d6\""
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.033806315Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.059461474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k5rnh,Uid:4f35287d-7816-4c9f-9911-a712c85df032,Namespace:kube-system,Attempt:0,}"
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.106732561Z" level=info msg="StartContainer for \"003d05ca351bc7bb412d67092773d0b5c29f9eb243c18130e99f3fc57fda131c\" returns successfully"
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.126416537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k5rnh,Uid:4f35287d-7816-4c9f-9911-a712c85df032,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\": failed to find network info for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\""
	Aug 19 13:05:34 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:34.299600149Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.298240477Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.299373050Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20240813-c6f155d6: active requests=0, bytes read=23615836"
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.300613322Z" level=info msg="ImageCreate event name:\"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.304011432Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.304822742Z" level=info msg="Pulled image \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" with image id \"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51\", repo tag \"docker.io/kindest/kindnetd:v20240813-c6f155d6\", repo digest \"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\", size \"33309097\" in 1.273685992s"
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.304957757Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" returns image reference \"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51\""
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.310095574Z" level=info msg="CreateContainer within sandbox \"4b0d420ecc120caccb475bb7e24f48e9fb83f07f4c930fb5de6f9dedce55dcc7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.329211192Z" level=info msg="CreateContainer within sandbox \"4b0d420ecc120caccb475bb7e24f48e9fb83f07f4c930fb5de6f9dedce55dcc7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"5449e5ffe10784f8cd1b3bede8fd1acc3767f98e140ff409d40ebd0a1f303b16\""
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.331703904Z" level=info msg="StartContainer for \"5449e5ffe10784f8cd1b3bede8fd1acc3767f98e140ff409d40ebd0a1f303b16\""
	Aug 19 13:05:35 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:35.413381764Z" level=info msg="StartContainer for \"5449e5ffe10784f8cd1b3bede8fd1acc3767f98e140ff409d40ebd0a1f303b16\" returns successfully"
	Aug 19 13:05:38 dockerenv-358134 containerd[809]: time="2024-08-19T13:05:38.903227225Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	
	==> describe nodes <==
	Name:               dockerenv-358134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=dockerenv-358134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=dockerenv-358134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_05_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-358134
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:05:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:05:38 +0000   Mon, 19 Aug 2024 13:05:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:05:38 +0000   Mon, 19 Aug 2024 13:05:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:05:38 +0000   Mon, 19 Aug 2024 13:05:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:05:38 +0000   Mon, 19 Aug 2024 13:05:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-358134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 d34e36652faf471e80c5e2ea62f4209c
	  System UUID:                2c35bd45-6078-425c-bf84-8fa8b43167fe
	  Boot ID:                    8c9f4b3e-6245-4429-b714-db63b5b637f4
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-k5rnh                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12s
	  kube-system                 etcd-dockerenv-358134                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17s
	  kube-system                 kindnet-wjc7z                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-apiserver-dockerenv-358134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-358134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-proxy-bqmbg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-scheduler-dockerenv-358134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11s   kube-proxy       
	  Normal   Starting                 17s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  17s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17s   kubelet          Node dockerenv-358134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s   kubelet          Node dockerenv-358134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s   kubelet          Node dockerenv-358134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13s   node-controller  Node dockerenv-358134 event: Registered Node dockerenv-358134 in Controller
	
	
	==> dmesg <==
	[Aug19 11:09] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Aug19 12:28] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [f0a7c5488689223344dca9be32434c3cadc46c7f2054a45176b60fbc756221a3] <==
	{"level":"info","ts":"2024-08-19T13:05:22.346733Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:05:22.346970Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T13:05:22.346993Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:05:22.347058Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T13:05:22.347069Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T13:05:23.108982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T13:05:23.109179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T13:05:23.109296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-19T13:05:23.109411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:05:23.109491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T13:05:23.109583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-19T13:05:23.109665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T13:05:23.111993Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-358134 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:05:23.112291Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:05:23.112710Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:05:23.115831Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:05:23.120464Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:05:23.121529Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-19T13:05:23.127805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:05:23.128744Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:05:23.128181Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:05:23.135916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:05:23.128437Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:05:23.139969Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:05:23.148137Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 13:05:45 up 1 day,  2:48,  0 users,  load average: 1.77, 1.54, 2.04
	Linux dockerenv-358134 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [5449e5ffe10784f8cd1b3bede8fd1acc3767f98e140ff409d40ebd0a1f303b16] <==
	I0819 13:05:35.894893       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W0819 13:05:35.996325       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:05:35.996712       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:05:35.996519       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:05:35.996941       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 13:05:35.997497       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:05:35.997656       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 13:05:36.944889       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:05:36.944929       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:05:37.065695       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:05:37.065735       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 13:05:37.226372       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:05:37.226405       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 13:05:39.238083       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:05:39.238118       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 13:05:39.739672       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:05:39.739708       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:05:39.947117       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:05:39.947173       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 13:05:44.120037       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:05:44.120073       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:05:45.506089       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:05:45.506148       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 13:05:45.698883       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:05:45.698924       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [2cc7d1b9a773f33b1459c5a968b3877a35fd58ec6e5519918d119ec1fb95c110] <==
	I0819 13:05:25.944763       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 13:05:25.944771       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 13:05:25.944777       1 cache.go:39] Caches are synced for autoregister controller
	I0819 13:05:25.958074       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 13:05:25.990057       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 13:05:25.990090       1 policy_source.go:224] refreshing policies
	E0819 13:05:25.999116       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0819 13:05:26.029571       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 13:05:26.038600       1 controller.go:615] quota admission added evaluator for: namespaces
	I0819 13:05:26.202705       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 13:05:26.736551       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 13:05:26.742006       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 13:05:26.742035       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 13:05:27.361670       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 13:05:27.409461       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 13:05:27.552788       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 13:05:27.568652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0819 13:05:27.572378       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 13:05:27.581901       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 13:05:27.896971       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 13:05:28.290364       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 13:05:28.303269       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 13:05:28.329798       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 13:05:33.540702       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 13:05:33.657123       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a1e5a579bcc8a108aadd34203a1d92b5fca3a49fcf611e5b8e04e055bbb2880d] <==
	I0819 13:05:32.739640       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 13:05:32.746767       1 shared_informer.go:320] Caches are synced for GC
	I0819 13:05:32.746784       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0819 13:05:32.751187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="dockerenv-358134"
	I0819 13:05:32.774393       1 shared_informer.go:320] Caches are synced for deployment
	I0819 13:05:32.787092       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0819 13:05:32.824332       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:05:32.833929       1 shared_informer.go:320] Caches are synced for disruption
	I0819 13:05:32.836234       1 shared_informer.go:320] Caches are synced for expand
	I0819 13:05:32.836553       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 13:05:32.844547       1 shared_informer.go:320] Caches are synced for persistent volume
	I0819 13:05:32.847039       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 13:05:32.883869       1 shared_informer.go:320] Caches are synced for stateful set
	I0819 13:05:32.884819       1 shared_informer.go:320] Caches are synced for ephemeral
	I0819 13:05:32.887987       1 shared_informer.go:320] Caches are synced for PVC protection
	I0819 13:05:33.275696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:05:33.344829       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 13:05:33.345012       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 13:05:33.398384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="dockerenv-358134"
	I0819 13:05:33.760295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="90.283564ms"
	I0819 13:05:33.777714       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="16.781198ms"
	I0819 13:05:33.777886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="104.5µs"
	I0819 13:05:33.778031       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="54.9µs"
	I0819 13:05:33.788013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="80.746µs"
	I0819 13:05:38.915137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="dockerenv-358134"
	
	
	==> kube-proxy [003d05ca351bc7bb412d67092773d0b5c29f9eb243c18130e99f3fc57fda131c] <==
	I0819 13:05:34.157849       1 server_linux.go:66] "Using iptables proxy"
	I0819 13:05:34.256652       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 13:05:34.256728       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 13:05:34.275388       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 13:05:34.275446       1 server_linux.go:169] "Using iptables Proxier"
	I0819 13:05:34.277566       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 13:05:34.277994       1 server.go:483] "Version info" version="v1.31.0"
	I0819 13:05:34.278019       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:05:34.285844       1 config.go:197] "Starting service config controller"
	I0819 13:05:34.285892       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 13:05:34.285918       1 config.go:104] "Starting endpoint slice config controller"
	I0819 13:05:34.285924       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 13:05:34.287555       1 config.go:326] "Starting node config controller"
	I0819 13:05:34.287716       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 13:05:34.386732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:05:34.386787       1 shared_informer.go:320] Caches are synced for service config
	I0819 13:05:34.389663       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b7e1027fc5808cd28cbdcbdfc28d1485159dc0644caa8ba9762b8b426204240b] <==
	W0819 13:05:25.961240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 13:05:25.962838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:25.961278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:05:25.962960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:25.961327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 13:05:25.963060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.826856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 13:05:26.826909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.843593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 13:05:26.843862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.890664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 13:05:26.890709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.893133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:05:26.893347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.898007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 13:05:26.898046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.940637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 13:05:26.940686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.986480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 13:05:26.986639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:26.986486       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:05:26.986922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:05:27.034917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 13:05:27.035196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 13:05:27.549910       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 13:05:29 dockerenv-358134 kubelet[1470]: I0819 13:05:29.440568    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-358134" podStartSLOduration=1.440520276 podStartE2EDuration="1.440520276s" podCreationTimestamp="2024-08-19 13:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 13:05:29.413010243 +0000 UTC m=+1.305785720" watchObservedRunningTime="2024-08-19 13:05:29.440520276 +0000 UTC m=+1.333295745"
	Aug 19 13:05:29 dockerenv-358134 kubelet[1470]: I0819 13:05:29.456459    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-358134" podStartSLOduration=1.456431703 podStartE2EDuration="1.456431703s" podCreationTimestamp="2024-08-19 13:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 13:05:29.441022714 +0000 UTC m=+1.333798199" watchObservedRunningTime="2024-08-19 13:05:29.456431703 +0000 UTC m=+1.349207172"
	Aug 19 13:05:29 dockerenv-358134 kubelet[1470]: I0819 13:05:29.484802    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-358134" podStartSLOduration=1.484782847 podStartE2EDuration="1.484782847s" podCreationTimestamp="2024-08-19 13:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 13:05:29.456960258 +0000 UTC m=+1.349735727" watchObservedRunningTime="2024-08-19 13:05:29.484782847 +0000 UTC m=+1.377558324"
	Aug 19 13:05:32 dockerenv-358134 kubelet[1470]: I0819 13:05:32.777316    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fe9f4f55-bc94-4e7e-904c-bfe9e08c0d0a-tmp\") pod \"storage-provisioner\" (UID: \"fe9f4f55-bc94-4e7e-904c-bfe9e08c0d0a\") " pod="kube-system/storage-provisioner"
	Aug 19 13:05:32 dockerenv-358134 kubelet[1470]: I0819 13:05:32.777383    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjb4k\" (UniqueName: \"kubernetes.io/projected/fe9f4f55-bc94-4e7e-904c-bfe9e08c0d0a-kube-api-access-bjb4k\") pod \"storage-provisioner\" (UID: \"fe9f4f55-bc94-4e7e-904c-bfe9e08c0d0a\") " pod="kube-system/storage-provisioner"
	Aug 19 13:05:32 dockerenv-358134 kubelet[1470]: I0819 13:05:32.888873    1470 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.564453    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.564429874 podStartE2EDuration="3.564429874s" podCreationTimestamp="2024-08-19 13:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 13:05:33.361300722 +0000 UTC m=+5.254076199" watchObservedRunningTime="2024-08-19 13:05:33.564429874 +0000 UTC m=+5.457205343"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.584858    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqlx7\" (UniqueName: \"kubernetes.io/projected/44490b70-2cbb-45bb-87fb-895a4c1bbe5b-kube-api-access-zqlx7\") pod \"kindnet-wjc7z\" (UID: \"44490b70-2cbb-45bb-87fb-895a4c1bbe5b\") " pod="kube-system/kindnet-wjc7z"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.585089    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/44490b70-2cbb-45bb-87fb-895a4c1bbe5b-cni-cfg\") pod \"kindnet-wjc7z\" (UID: \"44490b70-2cbb-45bb-87fb-895a4c1bbe5b\") " pod="kube-system/kindnet-wjc7z"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.585214    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44490b70-2cbb-45bb-87fb-895a4c1bbe5b-lib-modules\") pod \"kindnet-wjc7z\" (UID: \"44490b70-2cbb-45bb-87fb-895a4c1bbe5b\") " pod="kube-system/kindnet-wjc7z"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.585312    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44490b70-2cbb-45bb-87fb-895a4c1bbe5b-xtables-lock\") pod \"kindnet-wjc7z\" (UID: \"44490b70-2cbb-45bb-87fb-895a4c1bbe5b\") " pod="kube-system/kindnet-wjc7z"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.686124    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/edf7e16f-fe1c-49fe-a6b2-5c52368f453b-kube-proxy\") pod \"kube-proxy-bqmbg\" (UID: \"edf7e16f-fe1c-49fe-a6b2-5c52368f453b\") " pod="kube-system/kube-proxy-bqmbg"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.686348    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edf7e16f-fe1c-49fe-a6b2-5c52368f453b-xtables-lock\") pod \"kube-proxy-bqmbg\" (UID: \"edf7e16f-fe1c-49fe-a6b2-5c52368f453b\") " pod="kube-system/kube-proxy-bqmbg"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.686436    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edf7e16f-fe1c-49fe-a6b2-5c52368f453b-lib-modules\") pod \"kube-proxy-bqmbg\" (UID: \"edf7e16f-fe1c-49fe-a6b2-5c52368f453b\") " pod="kube-system/kube-proxy-bqmbg"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.686527    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmbm\" (UniqueName: \"kubernetes.io/projected/edf7e16f-fe1c-49fe-a6b2-5c52368f453b-kube-api-access-5nmbm\") pod \"kube-proxy-bqmbg\" (UID: \"edf7e16f-fe1c-49fe-a6b2-5c52368f453b\") " pod="kube-system/kube-proxy-bqmbg"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.787094    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f35287d-7816-4c9f-9911-a712c85df032-config-volume\") pod \"coredns-6f6b679f8f-k5rnh\" (UID: \"4f35287d-7816-4c9f-9911-a712c85df032\") " pod="kube-system/coredns-6f6b679f8f-k5rnh"
	Aug 19 13:05:33 dockerenv-358134 kubelet[1470]: I0819 13:05:33.787186    1470 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h85tf\" (UniqueName: \"kubernetes.io/projected/4f35287d-7816-4c9f-9911-a712c85df032-kube-api-access-h85tf\") pod \"coredns-6f6b679f8f-k5rnh\" (UID: \"4f35287d-7816-4c9f-9911-a712c85df032\") " pod="kube-system/coredns-6f6b679f8f-k5rnh"
	Aug 19 13:05:34 dockerenv-358134 kubelet[1470]: E0819 13:05:34.126776    1470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\": failed to find network info for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\""
	Aug 19 13:05:34 dockerenv-358134 kubelet[1470]: E0819 13:05:34.126848    1470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\": failed to find network info for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\"" pod="kube-system/coredns-6f6b679f8f-k5rnh"
	Aug 19 13:05:34 dockerenv-358134 kubelet[1470]: E0819 13:05:34.126868    1470 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\": failed to find network info for sandbox \"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\"" pod="kube-system/coredns-6f6b679f8f-k5rnh"
	Aug 19 13:05:34 dockerenv-358134 kubelet[1470]: E0819 13:05:34.127104    1470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k5rnh_kube-system(4f35287d-7816-4c9f-9911-a712c85df032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k5rnh_kube-system(4f35287d-7816-4c9f-9911-a712c85df032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\\\": failed to find network info for sandbox \\\"c45357730a7923b9fe188ed73d8906cd9b9be47dabd8c5fd4c80a67bef1126c2\\\"\"" pod="kube-system/coredns-6f6b679f8f-k5rnh" podUID="4f35287d-7816-4c9f-9911-a712c85df032"
	Aug 19 13:05:34 dockerenv-358134 kubelet[1470]: I0819 13:05:34.502418    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bqmbg" podStartSLOduration=1.502396748 podStartE2EDuration="1.502396748s" podCreationTimestamp="2024-08-19 13:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 13:05:34.357166497 +0000 UTC m=+6.249941974" watchObservedRunningTime="2024-08-19 13:05:34.502396748 +0000 UTC m=+6.395172225"
	Aug 19 13:05:37 dockerenv-358134 kubelet[1470]: I0819 13:05:37.790677    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wjc7z" podStartSLOduration=3.514674902 podStartE2EDuration="4.79065675s" podCreationTimestamp="2024-08-19 13:05:33 +0000 UTC" firstStartedPulling="2024-08-19 13:05:34.02990269 +0000 UTC m=+5.922678158" lastFinishedPulling="2024-08-19 13:05:35.305884529 +0000 UTC m=+7.198660006" observedRunningTime="2024-08-19 13:05:36.367860638 +0000 UTC m=+8.260636115" watchObservedRunningTime="2024-08-19 13:05:37.79065675 +0000 UTC m=+9.683432218"
	Aug 19 13:05:38 dockerenv-358134 kubelet[1470]: I0819 13:05:38.902751    1470 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 13:05:38 dockerenv-358134 kubelet[1470]: I0819 13:05:38.903824    1470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	
	==> storage-provisioner [136d326d81414b799e34e26cee08c25d4b156470f74a7791929d51dca6960c70] <==
	I0819 13:05:33.249112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p dockerenv-358134 -n dockerenv-358134
helpers_test.go:261: (dbg) Run:  kubectl --context dockerenv-358134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-6f6b679f8f-k5rnh
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context dockerenv-358134 describe pod coredns-6f6b679f8f-k5rnh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-358134 describe pod coredns-6f6b679f8f-k5rnh: exit status 1 (101.054863ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6f6b679f8f-k5rnh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context dockerenv-358134 describe pod coredns-6f6b679f8f-k5rnh: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-358134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-358134
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-358134: (1.995760828s)
--- FAIL: TestDockerEnvContainerd (51.35s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (188.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ef489c73-f956-477e-99bf-518546dd963f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004785078s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-893834 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-893834 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-893834 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-893834 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [874060bc-d4e3-4089-9726-d00189c850f2] Pending
helpers_test.go:344: "sp-pod" [874060bc-d4e3-4089-9726-d00189c850f2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-893834 -n functional-893834
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-08-19 13:12:09.969526645 +0000 UTC m=+980.328478863
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-893834 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-893834 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-893834/192.168.49.2
Start Time:       Mon, 19 Aug 2024 13:09:09 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsjhn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-tsjhn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-893834
  Normal   Pulling    98s (x4 over 3m)  kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     97s (x4 over 3m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     97s (x4 over 3m)  kubelet            Error: ErrImagePull
  Warning  Failed     72s (x6 over 3m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    58s (x7 over 3m)  kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-893834 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-893834 logs sp-pod -n default: exit status 1 (107.052988ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-893834 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-893834
helpers_test.go:235: (dbg) docker inspect functional-893834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ea79d19d97b54cf5ee3b4131d01c2a1c9e54c1ac6489f3a07d8f240c8ce0e59",
	        "Created": "2024-08-19T13:06:50.60462856Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4170771,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T13:06:50.785033423Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/5ea79d19d97b54cf5ee3b4131d01c2a1c9e54c1ac6489f3a07d8f240c8ce0e59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ea79d19d97b54cf5ee3b4131d01c2a1c9e54c1ac6489f3a07d8f240c8ce0e59/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ea79d19d97b54cf5ee3b4131d01c2a1c9e54c1ac6489f3a07d8f240c8ce0e59/hosts",
	        "LogPath": "/var/lib/docker/containers/5ea79d19d97b54cf5ee3b4131d01c2a1c9e54c1ac6489f3a07d8f240c8ce0e59/5ea79d19d97b54cf5ee3b4131d01c2a1c9e54c1ac6489f3a07d8f240c8ce0e59-json.log",
	        "Name": "/functional-893834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-893834:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-893834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ab1e98044ee50005b9db07961670deffeb9d99fda63ad358d7f0e6afbf88ff50-init/diff:/var/lib/docker/overlay2/f9730c920ad297aa3b42f5a0ebbe1c9311721ca848f3268205322d3e26bf32e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab1e98044ee50005b9db07961670deffeb9d99fda63ad358d7f0e6afbf88ff50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab1e98044ee50005b9db07961670deffeb9d99fda63ad358d7f0e6afbf88ff50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab1e98044ee50005b9db07961670deffeb9d99fda63ad358d7f0e6afbf88ff50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-893834",
	                "Source": "/var/lib/docker/volumes/functional-893834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-893834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-893834",
	                "name.minikube.sigs.k8s.io": "functional-893834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d43128144e80afd214442fb8492a3ee8c725214a2f89e3552b5932e5bc2e62d",
	            "SandboxKey": "/var/run/docker/netns/8d43128144e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38275"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38276"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38279"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38277"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38278"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-893834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "26a6d02e0d4777ae1ca44bc1db83d30b99d8b2e4539da514d424a10abb47db11",
	                    "EndpointID": "507579062a45f30a6cf95e4032e744bcc4b4c2881e0f69b827e1e30aed04f910",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-893834",
	                        "5ea79d19d97b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-893834 -n functional-893834
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 logs -n 25: (1.760139558s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-893834 image load --daemon                                      | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | kicbase/echo-server:functional-893834                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-893834 image ls                                                 | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	| image          | functional-893834 image save kicbase/echo-server:functional-893834         | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-893834 image rm                                                 | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | kicbase/echo-server:functional-893834                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-893834 image ls                                                 | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	| image          | functional-893834 image load                                               | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-893834 image ls                                                 | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	| image          | functional-893834 image save --daemon                                      | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | kicbase/echo-server:functional-893834                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh sudo cat                                             | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /etc/ssl/certs/4146547.pem                                                 |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh sudo cat                                             | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /usr/share/ca-certificates/4146547.pem                                     |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh sudo cat                                             | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /etc/ssl/certs/51391683.0                                                  |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh sudo cat                                             | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /etc/ssl/certs/41465472.pem                                                |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh sudo cat                                             | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /usr/share/ca-certificates/41465472.pem                                    |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh sudo cat                                             | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                  |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh sudo cat                                             | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:09 UTC | 19 Aug 24 13:09 UTC |
	|                | /etc/test/nested/copy/4146547/hosts                                        |                   |         |         |                     |                     |
	| image          | functional-893834                                                          | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-893834                                                          | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-893834 ssh pgrep                                                | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-893834 image build -t                                           | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | localhost/my-image:functional-893834                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-893834 image ls                                                 | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	| image          | functional-893834                                                          | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-893834                                                          | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| update-context | functional-893834                                                          | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-893834                                                          | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-893834                                                          | functional-893834 | jenkins | v1.33.1 | 19 Aug 24 13:10 UTC | 19 Aug 24 13:10 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:09:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:09:43.904746 4181030 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:09:43.904949 4181030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:09:43.904978 4181030 out.go:358] Setting ErrFile to fd 2...
	I0819 13:09:43.904998 4181030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:09:43.905282 4181030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:09:43.905683 4181030 out.go:352] Setting JSON to false
	I0819 13:09:43.906708 4181030 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":96728,"bootTime":1723976256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 13:09:43.906832 4181030 start.go:139] virtualization:  
	I0819 13:09:43.910113 4181030 out.go:177] * [functional-893834] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 13:09:43.913645 4181030 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:09:43.913723 4181030 notify.go:220] Checking for updates...
	I0819 13:09:43.919005 4181030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:09:43.921725 4181030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:09:43.924586 4181030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 13:09:43.927273 4181030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 13:09:43.930151 4181030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:09:43.933298 4181030 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:09:43.933867 4181030 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:09:43.956771 4181030 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 13:09:43.956890 4181030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:09:44.018170 4181030 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 13:09:44.004856777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:09:44.018289 4181030 docker.go:307] overlay module found
	I0819 13:09:44.023111 4181030 out.go:177] * Using the docker driver based on existing profile
	I0819 13:09:44.025903 4181030 start.go:297] selected driver: docker
	I0819 13:09:44.025947 4181030 start.go:901] validating driver "docker" against &{Name:functional-893834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-893834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:09:44.026091 4181030 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:09:44.026204 4181030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:09:44.084678 4181030 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 13:09:44.074607028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:09:44.085116 4181030 cni.go:84] Creating CNI manager for ""
	I0819 13:09:44.085133 4181030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:09:44.085181 4181030 start.go:340] cluster config:
	{Name:functional-893834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-893834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:09:44.088134 4181030 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	20354cf222fa8       a422e0e982356       2 minutes ago       Running             dashboard-metrics-scraper   0                   abc12250c010c       dashboard-metrics-scraper-c5db448b4-ln542
	144173e3b346a       20b332c9a70d8       2 minutes ago       Running             kubernetes-dashboard        0                   2a1289eeb6230       kubernetes-dashboard-695b96c756-fcw2t
	5dc947c3ea3b3       1611cd07b61d5       2 minutes ago       Exited              mount-munger                0                   d6901bd9d53ca       busybox-mount
	d4614da1da453       72565bf5bbedf       2 minutes ago       Running             echoserver-arm              0                   caf774444f337       hello-node-64b4f8f9ff-svk74
	6e3c29e4b0fab       72565bf5bbedf       2 minutes ago       Running             echoserver-arm              0                   2f2a8f9bfdc7a       hello-node-connect-65d86f57f4-m5dq8
	e0e6d1fcc74e2       70594c812316a       3 minutes ago       Running             nginx                       0                   d23991ebe3520       nginx-svc
	d022c6d9a28b7       cd0f0ae0ec9e0       3 minutes ago       Running             kube-apiserver              0                   4212c4ea9bb7a       kube-apiserver-functional-893834
	2e56b7742f0a3       fcb0683e6bdbd       3 minutes ago       Running             kube-controller-manager     2                   25ddae1b389fb       kube-controller-manager-functional-893834
	5693efb956dec       6a23fa8fd2b78       3 minutes ago       Running             kindnet-cni                 1                   8be286c19f0bf       kindnet-wdj7g
	cc2ecbc741059       71d55d66fd4ee       3 minutes ago       Running             kube-proxy                  1                   c0abda1048da4       kube-proxy-p5vvj
	538927581d7ea       27e3830e14027       3 minutes ago       Running             etcd                        1                   10546d648ecb5       etcd-functional-893834
	94fce972143b4       fcb0683e6bdbd       3 minutes ago       Exited              kube-controller-manager     1                   25ddae1b389fb       kube-controller-manager-functional-893834
	3a39a1c789be7       fbbbd428abb4d       3 minutes ago       Running             kube-scheduler              1                   fa1a309baa13a       kube-scheduler-functional-893834
	c1121081edc47       2437cf7621777       3 minutes ago       Running             coredns                     1                   68b1c6e5c9281       coredns-6f6b679f8f-8w4zn
	70fabff5a7fe6       ba04bb24b9575       3 minutes ago       Running             storage-provisioner         1                   0c8fc0daf6886       storage-provisioner
	f3d3aa3a97652       2437cf7621777       4 minutes ago       Exited              coredns                     0                   68b1c6e5c9281       coredns-6f6b679f8f-8w4zn
	f8fb76197b58e       6a23fa8fd2b78       4 minutes ago       Exited              kindnet-cni                 0                   8be286c19f0bf       kindnet-wdj7g
	03bf383b35387       ba04bb24b9575       4 minutes ago       Exited              storage-provisioner         0                   0c8fc0daf6886       storage-provisioner
	1a1cbda14c34e       27e3830e14027       5 minutes ago       Exited              etcd                        0                   10546d648ecb5       etcd-functional-893834
	1422bdedc620b       fbbbd428abb4d       5 minutes ago       Exited              kube-scheduler              0                   fa1a309baa13a       kube-scheduler-functional-893834
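A listing like the table above can be reproduced on the node itself with crictl; the exact command the report collector runs is not shown here, but an equivalent is:

    minikube -p functional-893834 ssh -- sudo crictl ps -a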
	
	
	==> containerd <==
	Aug 19 13:09:56 functional-893834 containerd[3478]: time="2024-08-19T13:09:56.296124942Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Aug 19 13:09:56 functional-893834 containerd[3478]: time="2024-08-19T13:09:56.303865398Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-893834\" returns successfully"
	Aug 19 13:09:56 functional-893834 containerd[3478]: time="2024-08-19T13:09:56.865188986Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-893834\""
	Aug 19 13:09:56 functional-893834 containerd[3478]: time="2024-08-19T13:09:56.868784516Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 13:09:56 functional-893834 containerd[3478]: time="2024-08-19T13:09:56.869165395Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-893834\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 13:10:03 functional-893834 containerd[3478]: time="2024-08-19T13:10:03.392480590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 13:10:03 functional-893834 containerd[3478]: time="2024-08-19T13:10:03.392604003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 13:10:03 functional-893834 containerd[3478]: time="2024-08-19T13:10:03.392618739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 13:10:03 functional-893834 containerd[3478]: time="2024-08-19T13:10:03.392792129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 13:10:03 functional-893834 containerd[3478]: time="2024-08-19T13:10:03.524614749Z" level=info msg="shim disconnected" id=n7b6qftwnnxoe49zqw7acn9pw namespace=k8s.io
	Aug 19 13:10:03 functional-893834 containerd[3478]: time="2024-08-19T13:10:03.525266708Z" level=warning msg="cleaning up after shim disconnected" id=n7b6qftwnnxoe49zqw7acn9pw namespace=k8s.io
	Aug 19 13:10:03 functional-893834 containerd[3478]: time="2024-08-19T13:10:03.525403585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 13:10:04 functional-893834 containerd[3478]: time="2024-08-19T13:10:04.175213803Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-893834\""
	Aug 19 13:10:04 functional-893834 containerd[3478]: time="2024-08-19T13:10:04.181689775Z" level=info msg="ImageCreate event name:\"sha256:bd18ad283316efc2a60f11d2f5336471ce303d430ed54006b371a1d021f23514\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 13:10:04 functional-893834 containerd[3478]: time="2024-08-19T13:10:04.184090334Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-893834\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 13:10:32 functional-893834 containerd[3478]: time="2024-08-19T13:10:32.500243649Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Aug 19 13:10:32 functional-893834 containerd[3478]: time="2024-08-19T13:10:32.502182237Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Aug 19 13:10:32 functional-893834 containerd[3478]: time="2024-08-19T13:10:32.678277160Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Aug 19 13:10:33 functional-893834 containerd[3478]: time="2024-08-19T13:10:33.033636618Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 19 13:10:33 functional-893834 containerd[3478]: time="2024-08-19T13:10:33.033681376Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=11041"
	Aug 19 13:11:54 functional-893834 containerd[3478]: time="2024-08-19T13:11:54.500155122Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Aug 19 13:11:54 functional-893834 containerd[3478]: time="2024-08-19T13:11:54.501398149Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Aug 19 13:11:54 functional-893834 containerd[3478]: time="2024-08-19T13:11:54.676409904Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Aug 19 13:11:55 functional-893834 containerd[3478]: time="2024-08-19T13:11:55.146934274Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 19 13:11:55 functional-893834 containerd[3478]: time="2024-08-19T13:11:55.146994738Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21314"
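The two failed nginx pulls above are Docker Hub's anonymous pull rate limit (HTTP 429), not a cluster fault. When reproducing this locally, one workaround (a suggestion, not part of the test harness) is to pull the image on an authenticated host and load it into the node instead of letting the kubelet pull it:

    docker login
    docker pull nginx:latest
    minikube -p functional-893834 image load nginx:latest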
	
	
	==> coredns [c1121081edc479097e06c2091c9d2bbedcd928ce45f956a32c490a9673d1736c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59905 - 38750 "HINFO IN 5023349560480375431.1688341748117639850. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015071769s
	
	
	==> coredns [f3d3aa3a97652bdb58f8e3e4a47b204709f7aa816ebd8875140e9f36eb86aa71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58818 - 52507 "HINFO IN 4659280228587722270.7227191423933114678. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.049755295s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-893834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-893834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=functional-893834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_07_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:07:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-893834
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:10:36 +0000   Mon, 19 Aug 2024 13:07:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:10:36 +0000   Mon, 19 Aug 2024 13:07:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:10:36 +0000   Mon, 19 Aug 2024 13:07:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:10:36 +0000   Mon, 19 Aug 2024 13:07:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-893834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 a40396488e2443238f0120bad7da8e2d
	  System UUID:                0d35ad3b-2dc3-4aee-ae3e-d679645e8d6b
	  Boot ID:                    8c9f4b3e-6245-4429-b714-db63b5b637f4
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-svk74                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  default                     hello-node-connect-65d86f57f4-m5dq8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-6f6b679f8f-8w4zn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m49s
	  kube-system                 etcd-functional-893834                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m56s
	  kube-system                 kindnet-wdj7g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m49s
	  kube-system                 kube-apiserver-functional-893834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-controller-manager-functional-893834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-p5vvj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-functional-893834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-ln542    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-fcw2t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m47s                  kube-proxy       
	  Normal   Starting                 3m52s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m1s (x8 over 5m1s)    kubelet          Node functional-893834 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m1s (x7 over 5m1s)    kubelet          Node functional-893834 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m1s (x7 over 5m1s)    kubelet          Node functional-893834 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 4m55s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m55s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    4m54s                  kubelet          Node functional-893834 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  4m54s                  kubelet          Node functional-893834 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     4m54s                  kubelet          Node functional-893834 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m50s                  node-controller  Node functional-893834 event: Registered Node functional-893834 in Controller
	  Normal   Starting                 3m42s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m42s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node functional-893834 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m42s (x7 over 3m42s)  kubelet          Node functional-893834 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m42s (x7 over 3m42s)  kubelet          Node functional-893834 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m34s                  node-controller  Node functional-893834 event: Registered Node functional-893834 in Controller
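The node description above can be regenerated against the same profile with a plain kubectl describe (assuming the kubectl context carries the profile name, as minikube sets up by default):

    kubectl --context functional-893834 describe node functional-893834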
	
	
	==> dmesg <==
	[Aug19 12:28] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [1a1cbda14c34e3c8e0edbd3902e0a30192115a3d8d904bde5d9b5bac16d4ee54] <==
	{"level":"info","ts":"2024-08-19T13:07:12.128049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:12.128082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T13:07:12.137531Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:12.139994Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-893834 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:07:12.140042Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:07:12.140428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:07:12.140704Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:12.140781Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:12.140807Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T13:07:12.140839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:07:12.140849Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:07:12.141456Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:07:12.142364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:07:12.172387Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:07:12.178945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-19T13:08:16.173453Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T13:08:16.173507Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-893834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-08-19T13:08:16.173631Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T13:08:16.173721Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T13:08:16.191620Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T13:08:16.191658Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T13:08:16.191702Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-08-19T13:08:16.193378Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T13:08:16.193488Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T13:08:16.193514Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-893834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [538927581d7ea9823a2da236259d5bca488d8bc5aebe21b747cd67ce0810988e] <==
	{"level":"info","ts":"2024-08-19T13:08:16.882154Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:08:16.884054Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T13:08:16.884174Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T13:08:16.884267Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T13:08:16.884682Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T13:08:16.884889Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T13:08:16.885088Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T13:08:16.886331Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T13:08:16.886459Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T13:08:18.662321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T13:08:18.662377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T13:08:18.662451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T13:08:18.662642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:18.662798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:18.662839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:18.662917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-08-19T13:08:18.664188Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-893834 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T13:08:18.664531Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:08:18.664958Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T13:08:18.665896Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:08:18.666985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T13:08:18.667843Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T13:08:18.667882Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T13:08:18.680467Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T13:08:18.978212Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 13:12:11 up 1 day,  2:54,  0 users,  load average: 0.91, 1.44, 1.85
	Linux functional-893834 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [5693efb956dec399830cee98d87e990b9b07e9610459ce5c88c9b115ed94cc2f] <==
	I0819 13:10:57.300899       1 main.go:299] handling current node
	I0819 13:11:07.300460       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:11:07.300500       1 main.go:299] handling current node
	W0819 13:11:13.691088       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:11:13.691123       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:11:14.424448       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:11:14.424484       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:11:17.300545       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:11:17.300586       1 main.go:299] handling current node
	I0819 13:11:27.300286       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:11:27.300324       1 main.go:299] handling current node
	W0819 13:11:32.024951       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:11:32.024986       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 13:11:37.300764       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:11:37.300806       1 main.go:299] handling current node
	W0819 13:11:46.903337       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:11:46.903465       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 13:11:47.300365       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:11:47.300405       1 main.go:299] handling current node
	W0819 13:11:48.585008       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:11:48.585043       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:11:57.300571       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:11:57.300609       1 main.go:299] handling current node
	I0819 13:12:07.300771       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:12:07.300811       1 main.go:299] handling current node
	
	
	==> kindnet [f8fb76197b58e6997090706fc34bdf48a328e570684533022d02ba78e4459edc] <==
	E0819 13:07:32.429264       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 13:07:33.444379       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:07:33.444414       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 13:07:35.393694       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:07:35.393761       1 main.go:299] handling current node
	W0819 13:07:39.450824       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:07:39.451024       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 13:07:41.640687       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:07:41.640726       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:07:43.174220       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:07:43.174326       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:07:45.392783       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:07:45.392875       1 main.go:299] handling current node
	I0819 13:07:55.393226       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:07:55.393262       1 main.go:299] handling current node
	W0819 13:07:55.497492       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:07:55.497529       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 13:08:05.393260       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:08:05.393498       1 main.go:299] handling current node
	W0819 13:08:05.525962       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:08:05.526010       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:08:07.698719       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:08:07.698760       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:08:15.393134       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 13:08:15.393173       1 main.go:299] handling current node
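The repeated "forbidden" messages in both kindnet logs mean the kube-system:kindnet service account lacks list/watch rights on namespaces, pods and networkpolicies at the cluster scope. The grant can be checked directly; the ClusterRole name "kindnet" below is an assumption based on the usual kindnet manifests:

    kubectl --context functional-893834 auth can-i list namespaces \
      --as=system:serviceaccount:kube-system:kindnet
    kubectl --context functional-893834 get clusterrole kindnet -o yaml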
	
	
	==> kube-apiserver [d022c6d9a28b788d0987fad3b992baf072bccf5f8d1e17235d138c8b13837de2] <==
	I0819 13:08:33.928762       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 13:08:33.928962       1 policy_source.go:224] refreshing policies
	I0819 13:08:33.948358       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 13:08:33.948414       1 aggregator.go:171] initial CRD sync complete...
	I0819 13:08:33.948423       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 13:08:33.948429       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 13:08:33.948434       1 cache.go:39] Caches are synced for autoregister controller
	I0819 13:08:33.979470       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 13:08:34.694291       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 13:08:34.930893       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0819 13:08:34.932333       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 13:08:34.937838       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 13:08:35.576681       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 13:08:35.707135       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 13:08:35.729216       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 13:08:35.811222       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 13:08:35.819387       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 13:08:56.978994       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.248.96"}
	I0819 13:09:03.976607       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.214.129"}
	I0819 13:09:12.424542       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0819 13:09:12.547227       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.253.216"}
	I0819 13:09:22.171569       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.118.144"}
	I0819 13:09:45.470973       1 controller.go:615] quota admission added evaluator for: namespaces
	I0819 13:09:45.786495       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.120.105"}
	I0819 13:09:45.826865       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.1.108"}
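The ClusterIP allocations logged above (invalid-svc, nginx-svc, hello-node-connect, hello-node and the dashboard services) can be cross-checked against the live service list:

    kubectl --context functional-893834 get svc -A -o wide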
	
	
	==> kube-controller-manager [2e56b7742f0a3f86268b967393d92cb7a90751b43393a3d1b97d7c4009224338] <==
	E0819 13:09:45.601587       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 13:09:45.613119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="41.485828ms"
	E0819 13:09:45.614630       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 13:09:45.616705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.829729ms"
	E0819 13:09:45.617017       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 13:09:45.630341       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.293411ms"
	E0819 13:09:45.630587       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 13:09:45.631936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="12.579671ms"
	E0819 13:09:45.632111       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 13:09:45.645960       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.966308ms"
	E0819 13:09:45.646175       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 13:09:45.648249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.483851ms"
	E0819 13:09:45.648519       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0819 13:09:45.677781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="30.29453ms"
	I0819 13:09:45.701924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="22.25239ms"
	I0819 13:09:45.702838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="170.034µs"
	I0819 13:09:45.779275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="47.813099ms"
	I0819 13:09:45.798682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19.357885ms"
	I0819 13:09:45.799008       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="90.068µs"
	I0819 13:09:49.837612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="16.490432ms"
	I0819 13:09:49.838487       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.022µs"
	I0819 13:09:50.834965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.36901ms"
	I0819 13:09:50.835126       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="47.073µs"
	I0819 13:10:06.219761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-893834"
	I0819 13:10:36.424915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-893834"
	
	
	==> kube-controller-manager [94fce972143b452ebb596325e25d71e6b50bd7cd0fede8bc60e1f36692667771] <==
	I0819 13:08:19.033191       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:08:19.306787       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I0819 13:08:19.307261       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0819 13:08:19.338650       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I0819 13:08:19.339284       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0819 13:08:19.340725       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0819 13:08:19.352474       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I0819 13:08:19.352720       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I0819 13:08:19.352742       1 shared_informer.go:313] Waiting for caches to sync for job
	I0819 13:08:19.358437       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I0819 13:08:19.358685       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0819 13:08:19.358707       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0819 13:08:19.408117       1 shared_informer.go:320] Caches are synced for tokens
	I0819 13:08:19.431993       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0819 13:08:19.432312       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I0819 13:08:19.432601       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0819 13:08:19.432565       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0819 13:08:19.433677       1 shared_informer.go:313] Waiting for caches to sync for node
	I0819 13:08:19.433839       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	E0819 13:08:19.441106       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0819 13:08:19.441165       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0819 13:08:19.448440       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0819 13:08:19.448567       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0819 13:08:19.448588       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	F0819 13:08:20.453042       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pvc-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
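This older controller-manager instance exits with a fatal "connection refused" against 192.168.49.2:8441 because the apiserver was still restarting at that point; the newer instance shown earlier takes over once the apiserver is back. When diagnosing such restarts, apiserver reachability from inside the node can be probed with something like (assuming curl is available in the node image):

    minikube -p functional-893834 ssh -- curl -sk https://192.168.49.2:8441/livez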
	
	
	==> kube-proxy [cc2ecbc7410592a27e5e648c59311e51cd4bfa70b2a90833209007b7505361bf] <==
	E0819 13:08:19.472729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-893834&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	E0819 13:08:19.472729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:19.472888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:19.472975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:20.291259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:20.291317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:20.387524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-893834&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:20.387590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-893834&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:20.733400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:20.733462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:22.704410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:22.704588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:22.758594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-893834&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:22.758647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-893834&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:23.191185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:23.191244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:27.780855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-893834&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:27.780910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-893834&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:28.617200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:28.617272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0819 13:08:28.764343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0819 13:08:28.764418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	I0819 13:08:35.866154       1 shared_informer.go:320] Caches are synced for node config
	I0819 13:08:36.765041       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 13:08:37.065528       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1422bdedc620b4edb25e50689a78e98ef0e18db996ba686d4d28815590cdc4fc] <==
	W0819 13:07:14.646235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 13:07:14.647705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:07:14.646274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:07:14.647914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:07:14.646285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 13:07:14.648093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:07:15.436790       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 13:07:15.437045       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 13:07:15.450715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:07:15.450979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:07:15.549044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 13:07:15.549320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 13:07:15.611055       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 13:07:15.611101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 13:07:15.627853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 13:07:15.627899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 13:07:15.741156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 13:07:15.741377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 13:07:15.741420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0819 13:07:15.741393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 13:07:18.105382       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:08:16.251269       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0819 13:08:16.251407       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 13:08:16.251524       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0819 13:08:16.251743       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3a39a1c789be77c77889b15f38ea2bc151fa63d7d07568d73490605536374369] <==
	I0819 13:08:17.370231       1 serving.go:386] Generated self-signed cert in-memory
	I0819 13:08:19.362968       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 13:08:19.363213       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 13:08:19.375403       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0819 13:08:19.375897       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0819 13:08:19.375704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 13:08:19.376333       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:08:19.375725       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0819 13:08:19.377341       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 13:08:19.377357       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 13:08:19.377366       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0819 13:08:19.476826       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0819 13:08:19.477224       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:08:19.478277       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0819 13:08:33.907819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0819 13:08:33.908063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 13:08:33.908203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 13:09:45 functional-893834 kubelet[4479]: I0819 13:09:45.833225    4479 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb47p\" (UniqueName: \"kubernetes.io/projected/45bfdff5-178c-4482-a259-58bc48c68994-kube-api-access-bb47p\") pod \"kubernetes-dashboard-695b96c756-fcw2t\" (UID: \"45bfdff5-178c-4482-a259-58bc48c68994\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-fcw2t"
	Aug 19 13:09:45 functional-893834 kubelet[4479]: I0819 13:09:45.934049    4479 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdrl\" (UniqueName: \"kubernetes.io/projected/d82107d8-3068-4c3a-b366-88da91d652c4-kube-api-access-mcdrl\") pod \"dashboard-metrics-scraper-c5db448b4-ln542\" (UID: \"d82107d8-3068-4c3a-b366-88da91d652c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-ln542"
	Aug 19 13:09:45 functional-893834 kubelet[4479]: I0819 13:09:45.934161    4479 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d82107d8-3068-4c3a-b366-88da91d652c4-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-ln542\" (UID: \"d82107d8-3068-4c3a-b366-88da91d652c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-ln542"
	Aug 19 13:09:50 functional-893834 kubelet[4479]: E0819 13:09:50.608622    4479 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 19 13:09:50 functional-893834 kubelet[4479]: E0819 13:09:50.608685    4479 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 19 13:09:50 functional-893834 kubelet[4479]: E0819 13:09:50.608790    4479 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsjhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod sp-pod_default(874060bc-d4e3-4089-9726-d00189c850f2): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Aug 19 13:09:50 functional-893834 kubelet[4479]: E0819 13:09:50.610401    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:09:50 functional-893834 kubelet[4479]: I0819 13:09:50.832237    4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-ln542" podStartSLOduration=1.943505376 podStartE2EDuration="5.83221701s" podCreationTimestamp="2024-08-19 13:09:45 +0000 UTC" firstStartedPulling="2024-08-19 13:09:46.203307073 +0000 UTC m=+76.835415712" lastFinishedPulling="2024-08-19 13:09:50.092018706 +0000 UTC m=+80.724127346" observedRunningTime="2024-08-19 13:09:50.832039953 +0000 UTC m=+81.464148593" watchObservedRunningTime="2024-08-19 13:09:50.83221701 +0000 UTC m=+81.464325650"
	Aug 19 13:09:50 functional-893834 kubelet[4479]: I0819 13:09:50.832530    4479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-fcw2t" podStartSLOduration=2.974187826 podStartE2EDuration="5.832520582s" podCreationTimestamp="2024-08-19 13:09:45 +0000 UTC" firstStartedPulling="2024-08-19 13:09:46.130141549 +0000 UTC m=+76.762250189" lastFinishedPulling="2024-08-19 13:09:48.988474305 +0000 UTC m=+79.620582945" observedRunningTime="2024-08-19 13:09:49.830232475 +0000 UTC m=+80.462341131" watchObservedRunningTime="2024-08-19 13:09:50.832520582 +0000 UTC m=+81.464629222"
	Aug 19 13:10:05 functional-893834 kubelet[4479]: E0819 13:10:05.500008    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:10:17 functional-893834 kubelet[4479]: E0819 13:10:17.499661    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:10:33 functional-893834 kubelet[4479]: E0819 13:10:33.034113    4479 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 19 13:10:33 functional-893834 kubelet[4479]: E0819 13:10:33.034185    4479 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 19 13:10:33 functional-893834 kubelet[4479]: E0819 13:10:33.034298    4479 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsjhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod sp-pod_default(874060bc-d4e3-4089-9726-d00189c850f2): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Aug 19 13:10:33 functional-893834 kubelet[4479]: E0819 13:10:33.035758    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:10:44 functional-893834 kubelet[4479]: E0819 13:10:44.500042    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:10:58 functional-893834 kubelet[4479]: E0819 13:10:58.500345    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:11:12 functional-893834 kubelet[4479]: E0819 13:11:12.499490    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:11:24 functional-893834 kubelet[4479]: E0819 13:11:24.500021    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:11:39 functional-893834 kubelet[4479]: E0819 13:11:39.500645    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:11:55 functional-893834 kubelet[4479]: E0819 13:11:55.147476    4479 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 19 13:11:55 functional-893834 kubelet[4479]: E0819 13:11:55.147555    4479 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Aug 19 13:11:55 functional-893834 kubelet[4479]: E0819 13:11:55.147692    4479 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsjhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod sp-pod_default(874060bc-d4e3-4089-9726-d00189c850f2): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Aug 19 13:11:55 functional-893834 kubelet[4479]: E0819 13:11:55.149087    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	Aug 19 13:12:06 functional-893834 kubelet[4479]: E0819 13:12:06.499886    4479 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="874060bc-d4e3-4089-9726-d00189c850f2"
	
	
	==> kubernetes-dashboard [144173e3b346adfaec17d6bad6b4c58d75706f20fd0ab1b080e083933173d9d2] <==
	2024/08/19 13:09:49 Using namespace: kubernetes-dashboard
	2024/08/19 13:09:49 Using in-cluster config to connect to apiserver
	2024/08/19 13:09:49 Using secret token for csrf signing
	2024/08/19 13:09:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/19 13:09:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/19 13:09:49 Successful initial request to the apiserver, version: v1.31.0
	2024/08/19 13:09:49 Generating JWE encryption key
	2024/08/19 13:09:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/19 13:09:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/19 13:09:49 Initializing JWE encryption key from synchronized object
	2024/08/19 13:09:49 Creating in-cluster Sidecar client
	2024/08/19 13:09:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:09:49 Serving insecurely on HTTP port: 9090
	2024/08/19 13:10:19 Successful request to sidecar
	2024/08/19 13:09:49 Starting overwatch
	
	
	==> storage-provisioner [03bf383b35387b18a152fb19d88a32b9df77c72c242e727b726a6485f5942a73] <==
	I0819 13:07:24.439533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:07:24.454103       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:07:24.454154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:07:24.463124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:07:24.464266       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-893834_d074cb32-7fce-46e1-952b-b71e6b34cb67!
	I0819 13:07:24.472049       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a03f477d-23a2-47ef-a2d3-559890c55f33", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-893834_d074cb32-7fce-46e1-952b-b71e6b34cb67 became leader
	I0819 13:07:24.564681       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-893834_d074cb32-7fce-46e1-952b-b71e6b34cb67!
	
	
	==> storage-provisioner [70fabff5a7fe6e7d608aa6a0ac87140e43ce882924bcc17fe39476a2f7119b46] <==
	I0819 13:08:16.518748       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:08:16.533405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:08:16.533457       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0819 13:08:22.801119       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0819 13:08:27.059415       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0819 13:08:37.026388       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:08:37.027005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a03f477d-23a2-47ef-a2d3-559890c55f33", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-893834_582bcbfd-e0c6-4767-9482-777630bb0da2 became leader
	I0819 13:08:37.027295       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-893834_582bcbfd-e0c6-4767-9482-777630bb0da2!
	I0819 13:08:37.130131       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-893834_582bcbfd-e0c6-4767-9482-777630bb0da2!
	I0819 13:09:09.372254       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0819 13:09:09.372377       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    220ff772-6b87-4ea7-809f-b33b8d774646 391 0 2024-08-19 13:07:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-19 13:07:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-76778699-d948-421a-849c-3138d3b40c2d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  76778699-d948-421a-849c-3138d3b40c2d 646 0 2024-08-19 13:09:09 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-19 13:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-19 13:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0819 13:09:09.372896       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-76778699-d948-421a-849c-3138d3b40c2d" provisioned
	I0819 13:09:09.373008       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0819 13:09:09.373060       1 volume_store.go:212] Trying to save persistentvolume "pvc-76778699-d948-421a-849c-3138d3b40c2d"
	I0819 13:09:09.378488       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"76778699-d948-421a-849c-3138d3b40c2d", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0819 13:09:09.423477       1 volume_store.go:219] persistentvolume "pvc-76778699-d948-421a-849c-3138d3b40c2d" saved
	I0819 13:09:09.424968       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"76778699-d948-421a-849c-3138d3b40c2d", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-76778699-d948-421a-849c-3138d3b40c2d
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-893834 -n functional-893834
helpers_test.go:261: (dbg) Run:  kubectl --context functional-893834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-893834 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-893834 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-893834/192.168.49.2
	Start Time:       Mon, 19 Aug 2024 13:09:34 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://5dc947c3ea3b3908a7fb48c19dfa585f1d968134efde7fc57060049aeb83f70a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 19 Aug 2024 13:09:35 +0000
	      Finished:     Mon, 19 Aug 2024 13:09:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7vbpg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7vbpg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m38s  default-scheduler  Successfully assigned default/busybox-mount to functional-893834
	  Normal  Pulling    2m38s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m37s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 952ms (952ms including waiting). Image size: 1935750 bytes.
	  Normal  Created    2m37s  kubelet            Created container mount-munger
	  Normal  Started    2m37s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-893834/192.168.49.2
	Start Time:       Mon, 19 Aug 2024 13:09:09 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsjhn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-tsjhn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-893834
	  Normal   Pulling    100s (x4 over 3m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     99s (x4 over 3m2s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     99s (x4 over 3m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     74s (x6 over 3m2s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    60s (x7 over 3m2s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.82s)
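The post-mortem above shows the claim bound and the volume provisioned; the test failed because the sp-pod container never started, as every pull of docker.io/nginx was rejected by Docker Hub's anonymous pull rate limit (HTTP 429). A minimal mitigation sketch for a runner that keeps hitting this limit, assuming a Docker Hub account is available (the <user>/<token> placeholders and the regcred secret name below are hypothetical), is to either pre-load the image into the node or authenticate the pulls:

	# Pre-load the image into the minikube node so the kubelet never has to reach Docker Hub
	out/minikube-linux-arm64 -p functional-893834 image load docker.io/library/nginx:latest

	# Or authenticate pulls: create a registry credential and attach it to the default service account
	kubectl --context functional-893834 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context functional-893834 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
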

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (382.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-914579 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 13:47:06.531049 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-914579 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.228747s)

                                                
                                                
-- stdout --
	* [old-k8s-version-914579] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-914579" primary control-plane node in "old-k8s-version-914579" cluster
	* Pulling base image v0.0.44-1723740748-19452 ...
	* Restarting existing docker container for "old-k8s-version-914579" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-914579 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:46:40.892354  152452 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:46:40.892545  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:46:40.892583  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:46:40.892603  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:46:40.892927  152452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:46:40.897483  152452 out.go:352] Setting JSON to false
	I0819 13:46:40.898485  152452 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":98945,"bootTime":1723976256,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 13:46:40.898596  152452 start.go:139] virtualization:  
	I0819 13:46:40.901699  152452 out.go:177] * [old-k8s-version-914579] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 13:46:40.906515  152452 notify.go:220] Checking for updates...
	I0819 13:46:40.911367  152452 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:46:40.913975  152452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:46:40.916850  152452 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:46:40.919594  152452 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 13:46:40.922283  152452 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 13:46:40.925056  152452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:46:40.928187  152452 config.go:182] Loaded profile config "old-k8s-version-914579": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 13:46:40.931412  152452 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 13:46:40.933919  152452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:46:40.997210  152452 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 13:46:40.997343  152452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:46:41.093936  152452 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:42 SystemTime:2024-08-19 13:46:41.082506201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:46:41.094072  152452 docker.go:307] overlay module found
	I0819 13:46:41.096965  152452 out.go:177] * Using the docker driver based on existing profile
	I0819 13:46:41.099485  152452 start.go:297] selected driver: docker
	I0819 13:46:41.099503  152452 start.go:901] validating driver "docker" against &{Name:old-k8s-version-914579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914579 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:46:41.099618  152452 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:46:41.100290  152452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:46:41.188961  152452 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:50 SystemTime:2024-08-19 13:46:41.175022168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:46:41.189313  152452 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:46:41.189341  152452 cni.go:84] Creating CNI manager for ""
	I0819 13:46:41.189358  152452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:46:41.189407  152452 start.go:340] cluster config:
	{Name:old-k8s-version-914579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:46:41.192744  152452 out.go:177] * Starting "old-k8s-version-914579" primary control-plane node in "old-k8s-version-914579" cluster
	I0819 13:46:41.195625  152452 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 13:46:41.198415  152452 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 13:46:41.201303  152452 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 13:46:41.201269  152452 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 13:46:41.201392  152452 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 13:46:41.201405  152452 cache.go:56] Caching tarball of preloaded images
	I0819 13:46:41.201500  152452 preload.go:172] Found /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 13:46:41.201508  152452 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0819 13:46:41.201658  152452 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/config.json ...
	W0819 13:46:41.221918  152452 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 13:46:41.221937  152452 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 13:46:41.222008  152452 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 13:46:41.222026  152452 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 13:46:41.222030  152452 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 13:46:41.222046  152452 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 13:46:41.222052  152452 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 13:46:41.355829  152452 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 13:46:41.355883  152452 cache.go:194] Successfully downloaded all kic artifacts
	I0819 13:46:41.355923  152452 start.go:360] acquireMachinesLock for old-k8s-version-914579: {Name:mk367928ac150aa926e220ad9d9371371d785260 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:46:41.355992  152452 start.go:364] duration metric: took 40.82µs to acquireMachinesLock for "old-k8s-version-914579"
	I0819 13:46:41.356132  152452 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:46:41.356147  152452 fix.go:54] fixHost starting: 
	I0819 13:46:41.357427  152452 cli_runner.go:164] Run: docker container inspect old-k8s-version-914579 --format={{.State.Status}}
	I0819 13:46:41.379198  152452 fix.go:112] recreateIfNeeded on old-k8s-version-914579: state=Stopped err=<nil>
	W0819 13:46:41.379227  152452 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:46:41.384660  152452 out.go:177] * Restarting existing docker container for "old-k8s-version-914579" ...
	I0819 13:46:41.387312  152452 cli_runner.go:164] Run: docker start old-k8s-version-914579
	I0819 13:46:41.875097  152452 cli_runner.go:164] Run: docker container inspect old-k8s-version-914579 --format={{.State.Status}}
	I0819 13:46:41.968483  152452 kic.go:430] container "old-k8s-version-914579" state is running.
	I0819 13:46:41.968927  152452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-914579
	I0819 13:46:41.999862  152452 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/config.json ...
	I0819 13:46:42.002467  152452 machine.go:93] provisionDockerMachine start ...
	I0819 13:46:42.002584  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:42.060099  152452 main.go:141] libmachine: Using SSH client type: native
	I0819 13:46:42.060376  152452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38550 <nil> <nil>}
	I0819 13:46:42.060384  152452 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:46:42.064799  152452 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44140->127.0.0.1:38550: read: connection reset by peer
	I0819 13:46:45.378720  152452 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-914579
	
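The "Error dialing TCP ... connection reset by peer" at 13:46:42 followed by a successful command result at 13:46:45 reflects the provisioner retrying SSH on the forwarded port 38550 while sshd inside the just-restarted container comes up. A minimal Go sketch of that retry pattern using golang.org/x/crypto/ssh; the key path is hypothetical, and host-key checking is skipped only because the target is a throwaway, locally port-forwarded container:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube keeps per-machine keys under its profile directory.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-914579/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local container, not a real host
		Timeout:         5 * time.Second,
	}
	// Keep dialing the forwarded SSH port until sshd in the restarted container answers.
	for attempt := 0; attempt < 10; attempt++ {
		client, err := ssh.Dial("tcp", "127.0.0.1:38550", cfg)
		if err == nil {
			fmt.Println("connected")
			client.Close()
			return
		}
		fmt.Println("dial failed, retrying:", err)
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for sshd")
}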
	I0819 13:46:45.378903  152452 ubuntu.go:169] provisioning hostname "old-k8s-version-914579"
	I0819 13:46:45.379278  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:45.413595  152452 main.go:141] libmachine: Using SSH client type: native
	I0819 13:46:45.413849  152452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38550 <nil> <nil>}
	I0819 13:46:45.413872  152452 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-914579 && echo "old-k8s-version-914579" | sudo tee /etc/hostname
	I0819 13:46:45.576557  152452 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-914579
	
	I0819 13:46:45.576639  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:45.599738  152452 main.go:141] libmachine: Using SSH client type: native
	I0819 13:46:45.600106  152452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38550 <nil> <nil>}
	I0819 13:46:45.600128  152452 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-914579' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-914579/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-914579' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:46:45.744241  152452 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:46:45.744268  152452 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19479-4141166/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-4141166/.minikube}
	I0819 13:46:45.744294  152452 ubuntu.go:177] setting up certificates
	I0819 13:46:45.744305  152452 provision.go:84] configureAuth start
	I0819 13:46:45.744377  152452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-914579
	I0819 13:46:45.761678  152452 provision.go:143] copyHostCerts
	I0819 13:46:45.761755  152452 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem, removing ...
	I0819 13:46:45.761771  152452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem
	I0819 13:46:45.761830  152452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem (1082 bytes)
	I0819 13:46:45.761936  152452 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem, removing ...
	I0819 13:46:45.761947  152452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem
	I0819 13:46:45.761971  152452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem (1123 bytes)
	I0819 13:46:45.762065  152452 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem, removing ...
	I0819 13:46:45.762076  152452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem
	I0819 13:46:45.762102  152452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem (1675 bytes)
	I0819 13:46:45.762161  152452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-914579 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-914579]
	I0819 13:46:46.230887  152452 provision.go:177] copyRemoteCerts
	I0819 13:46:46.235000  152452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:46:46.235106  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:46.278176  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:46.376807  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 13:46:46.404044  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 13:46:46.433649  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 13:46:46.461816  152452 provision.go:87] duration metric: took 717.483848ms to configureAuth
	I0819 13:46:46.461893  152452 ubuntu.go:193] setting minikube options for container-runtime
	I0819 13:46:46.462169  152452 config.go:182] Loaded profile config "old-k8s-version-914579": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 13:46:46.462205  152452 machine.go:96] duration metric: took 4.459704503s to provisionDockerMachine
	I0819 13:46:46.462236  152452 start.go:293] postStartSetup for "old-k8s-version-914579" (driver="docker")
	I0819 13:46:46.462264  152452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:46:46.462356  152452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:46:46.462432  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:46.479864  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:46.582609  152452 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:46:46.586818  152452 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 13:46:46.586856  152452 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 13:46:46.586871  152452 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 13:46:46.586886  152452 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 13:46:46.586900  152452 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/addons for local assets ...
	I0819 13:46:46.586952  152452 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/files for local assets ...
	I0819 13:46:46.587101  152452 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem -> 41465472.pem in /etc/ssl/certs
	I0819 13:46:46.587232  152452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:46:46.597655  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem --> /etc/ssl/certs/41465472.pem (1708 bytes)
	I0819 13:46:46.627824  152452 start.go:296] duration metric: took 165.556155ms for postStartSetup
	I0819 13:46:46.627909  152452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:46:46.627969  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:46.649373  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:46.740951  152452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 13:46:46.747773  152452 fix.go:56] duration metric: took 5.391618114s for fixHost
	I0819 13:46:46.747813  152452 start.go:83] releasing machines lock for "old-k8s-version-914579", held for 5.391807051s
	I0819 13:46:46.747888  152452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-914579
	I0819 13:46:46.775858  152452 ssh_runner.go:195] Run: cat /version.json
	I0819 13:46:46.775921  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:46.776151  152452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:46:46.776219  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:46.810149  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:46.811153  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:47.055240  152452 ssh_runner.go:195] Run: systemctl --version
	I0819 13:46:47.060619  152452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 13:46:47.065474  152452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 13:46:47.099399  152452 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 13:46:47.099532  152452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:46:47.110067  152452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 13:46:47.110133  152452 start.go:495] detecting cgroup driver to use...
	I0819 13:46:47.110180  152452 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 13:46:47.110255  152452 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 13:46:47.126048  152452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 13:46:47.140183  152452 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:46:47.140290  152452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:46:47.155158  152452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:46:47.170766  152452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:46:47.282731  152452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:46:47.415767  152452 docker.go:233] disabling docker service ...
	I0819 13:46:47.415880  152452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:46:47.434898  152452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:46:47.447804  152452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:46:47.562347  152452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:46:47.693462  152452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:46:47.706299  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:46:47.724093  152452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0819 13:46:47.733962  152452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 13:46:47.745107  152452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 13:46:47.745225  152452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 13:46:47.755613  152452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 13:46:47.766871  152452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 13:46:47.783658  152452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 13:46:47.799205  152452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:46:47.811985  152452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 13:46:47.823085  152452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:46:47.833806  152452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:46:47.843068  152452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:46:47.931388  152452 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 13:46:48.110290  152452 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 13:46:48.110403  152452 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 13:46:48.115593  152452 start.go:563] Will wait 60s for crictl version
	I0819 13:46:48.115673  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:46:48.121294  152452 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:46:48.163159  152452 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 13:46:48.163235  152452 ssh_runner.go:195] Run: containerd --version
	I0819 13:46:48.191654  152452 ssh_runner.go:195] Run: containerd --version
	I0819 13:46:48.218563  152452 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0819 13:46:48.221287  152452 cli_runner.go:164] Run: docker network inspect old-k8s-version-914579 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
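The --format argument in the docker network inspect call above is a Go text/template evaluated against the inspected object. A minimal, self-contained sketch of that mechanism; the Net struct and its values are illustrative stand-ins, not fields read from this run:

package main

import (
	"os"
	"text/template"
)

// Net is an illustrative stand-in for the object docker renders the template against.
type Net struct {
	Name    string
	Driver  string
	Subnet  string
	Gateway string
}

func main() {
	n := Net{Name: "old-k8s-version-914579", Driver: "bridge", Subnet: "192.168.76.0/24", Gateway: "192.168.76.1"}
	// Same idea as the --format string above: {{...}} field references are filled from the struct.
	tmpl := template.Must(template.New("net").Parse(
		`{"Name": "{{.Name}}", "Driver": "{{.Driver}}", "Subnet": "{{.Subnet}}", "Gateway": "{{.Gateway}}"}`))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}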
	I0819 13:46:48.236066  152452 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0819 13:46:48.240194  152452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
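The bash one-liner above strips any existing host.minikube.internal line from /etc/hosts and appends a pinned entry for the gateway IP. An equivalent Go sketch; it writes the file directly instead of going through a temp file and sudo cp, purely to keep the example short:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale host.minikube.internal entry, like the grep -v in the log.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("pinned:", entry)
}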
	I0819 13:46:48.254378  152452 kubeadm.go:883] updating cluster {Name:old-k8s-version-914579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:46:48.254503  152452 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 13:46:48.254567  152452 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:46:48.308539  152452 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 13:46:48.308562  152452 containerd.go:534] Images already preloaded, skipping extraction
	I0819 13:46:48.308623  152452 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:46:48.375472  152452 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 13:46:48.375495  152452 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:46:48.375504  152452 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0819 13:46:48.375630  152452 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-914579 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:46:48.375701  152452 ssh_runner.go:195] Run: sudo crictl info
	I0819 13:46:48.454002  152452 cni.go:84] Creating CNI manager for ""
	I0819 13:46:48.454107  152452 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:46:48.454141  152452 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:46:48.454197  152452 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-914579 NodeName:old-k8s-version-914579 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 13:46:48.454358  152452 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-914579"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:46:48.454491  152452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 13:46:48.465163  152452 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:46:48.465229  152452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:46:48.477707  152452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0819 13:46:48.500922  152452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:46:48.524948  152452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0819 13:46:48.548898  152452 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0819 13:46:48.552779  152452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:46:48.568053  152452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:46:48.704117  152452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:46:48.725172  152452 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579 for IP: 192.168.76.2
	I0819 13:46:48.725245  152452 certs.go:194] generating shared ca certs ...
	I0819 13:46:48.725279  152452 certs.go:226] acquiring lock for ca certs: {Name:mkb3362db9c120e28de14409a94f066387768cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:46:48.725489  152452 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key
	I0819 13:46:48.725562  152452 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key
	I0819 13:46:48.725601  152452 certs.go:256] generating profile certs ...
	I0819 13:46:48.725735  152452 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.key
	I0819 13:46:48.725850  152452 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/apiserver.key.263f292e
	I0819 13:46:48.725922  152452 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/proxy-client.key
	I0819 13:46:48.726090  152452 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/4146547.pem (1338 bytes)
	W0819 13:46:48.726148  152452 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/4146547_empty.pem, impossibly tiny 0 bytes
	I0819 13:46:48.726174  152452 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:46:48.726232  152452 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem (1082 bytes)
	I0819 13:46:48.726292  152452 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:46:48.726355  152452 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem (1675 bytes)
	I0819 13:46:48.726430  152452 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem (1708 bytes)
	I0819 13:46:48.727180  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:46:48.783217  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:46:48.813898  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:46:48.856331  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 13:46:48.886609  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 13:46:48.914915  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:46:48.964774  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:46:49.035648  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:46:49.082769  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:46:49.119835  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/4146547.pem --> /usr/share/ca-certificates/4146547.pem (1338 bytes)
	I0819 13:46:49.172345  152452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem --> /usr/share/ca-certificates/41465472.pem (1708 bytes)
	I0819 13:46:49.214561  152452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:46:49.242644  152452 ssh_runner.go:195] Run: openssl version
	I0819 13:46:49.250866  152452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:46:49.264124  152452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:46:49.268858  152452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:46:49.268938  152452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:46:49.281042  152452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:46:49.294313  152452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4146547.pem && ln -fs /usr/share/ca-certificates/4146547.pem /etc/ssl/certs/4146547.pem"
	I0819 13:46:49.307134  152452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4146547.pem
	I0819 13:46:49.313606  152452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 13:06 /usr/share/ca-certificates/4146547.pem
	I0819 13:46:49.313700  152452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4146547.pem
	I0819 13:46:49.322756  152452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4146547.pem /etc/ssl/certs/51391683.0"
	I0819 13:46:49.337035  152452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41465472.pem && ln -fs /usr/share/ca-certificates/41465472.pem /etc/ssl/certs/41465472.pem"
	I0819 13:46:49.347749  152452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41465472.pem
	I0819 13:46:49.356436  152452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 13:06 /usr/share/ca-certificates/41465472.pem
	I0819 13:46:49.356516  152452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41465472.pem
	I0819 13:46:49.367928  152452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41465472.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:46:49.380946  152452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:46:49.385173  152452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:46:49.397839  152452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:46:49.408730  152452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:46:49.418613  152452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:46:49.426194  152452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:46:49.435203  152452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
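Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now. A minimal Go sketch of the same check; the path is one of the certs named in the log and serves only as a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log runs the identical check for several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past:", cert.NotAfter)
}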
	I0819 13:46:49.445901  152452 kubeadm.go:392] StartCluster: {Name:old-k8s-version-914579 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-914579 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:46:49.446040  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 13:46:49.446125  152452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:46:49.513627  152452 cri.go:89] found id: "8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:46:49.513706  152452 cri.go:89] found id: "765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:46:49.513725  152452 cri.go:89] found id: "93226e5eab87c1b6743b4914b71dcda693967856ed7ea4dba4f9cd99c76340ac"
	I0819 13:46:49.513748  152452 cri.go:89] found id: "d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:46:49.513781  152452 cri.go:89] found id: "ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:46:49.513806  152452 cri.go:89] found id: "c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:46:49.513828  152452 cri.go:89] found id: "fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:46:49.513853  152452 cri.go:89] found id: "6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:46:49.513886  152452 cri.go:89] found id: ""
	I0819 13:46:49.513967  152452 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0819 13:46:49.531615  152452 cri.go:116] JSON = null
	W0819 13:46:49.531721  152452 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
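The warning above ("list returned 0 containers, but ps returned 8") arises because runc ... list -f json printed a literal null while crictl ps reported eight containers. A small Go sketch showing why a null JSON document decodes to an empty list; the container struct is a hypothetical, trimmed view of runc's output:

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer is a hypothetical, trimmed view of one entry in `runc list -f json` output.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// In this run the command printed "null" rather than a JSON array.
	raw := []byte("null")
	var containers []runcContainer
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	// Unmarshalling null into a slice leaves it nil, so the paused-container count is zero,
	// which is what triggers the "list returned 0 containers, but ps returned 8" warning.
	fmt.Printf("paused containers: %d\n", len(containers))
}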
	I0819 13:46:49.531931  152452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:46:49.542453  152452 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:46:49.542519  152452 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:46:49.542605  152452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:46:49.552038  152452 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:46:49.552622  152452 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-914579" does not appear in /home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:46:49.552801  152452 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-4141166/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-914579" cluster setting kubeconfig missing "old-k8s-version-914579" context setting]
	I0819 13:46:49.553174  152452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/kubeconfig: {Name:mk7b0eea2060f71726f692d0256a33fdf7565e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:46:49.554746  152452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:46:49.569391  152452 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0819 13:46:49.569471  152452 kubeadm.go:597] duration metric: took 26.93265ms to restartPrimaryControlPlane
	I0819 13:46:49.569497  152452 kubeadm.go:394] duration metric: took 123.607695ms to StartCluster
	I0819 13:46:49.569541  152452 settings.go:142] acquiring lock: {Name:mkaa4019b166703efd95aaa3737397f414197f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:46:49.569627  152452 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:46:49.570312  152452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/kubeconfig: {Name:mk7b0eea2060f71726f692d0256a33fdf7565e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:46:49.570622  152452 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 13:46:49.570989  152452 config.go:182] Loaded profile config "old-k8s-version-914579": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 13:46:49.571147  152452 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:46:49.571323  152452 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-914579"
	I0819 13:46:49.571554  152452 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-914579"
	W0819 13:46:49.571600  152452 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:46:49.571558  152452 addons.go:69] Setting dashboard=true in profile "old-k8s-version-914579"
	I0819 13:46:49.571662  152452 addons.go:234] Setting addon dashboard=true in "old-k8s-version-914579"
	W0819 13:46:49.571674  152452 addons.go:243] addon dashboard should already be in state true
	I0819 13:46:49.571698  152452 host.go:66] Checking if "old-k8s-version-914579" exists ...
	I0819 13:46:49.571708  152452 host.go:66] Checking if "old-k8s-version-914579" exists ...
	I0819 13:46:49.572291  152452 cli_runner.go:164] Run: docker container inspect old-k8s-version-914579 --format={{.State.Status}}
	I0819 13:46:49.572532  152452 cli_runner.go:164] Run: docker container inspect old-k8s-version-914579 --format={{.State.Status}}
	I0819 13:46:49.571574  152452 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-914579"
	I0819 13:46:49.572857  152452 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-914579"
	W0819 13:46:49.572908  152452 addons.go:243] addon metrics-server should already be in state true
	I0819 13:46:49.572939  152452 host.go:66] Checking if "old-k8s-version-914579" exists ...
	I0819 13:46:49.573377  152452 cli_runner.go:164] Run: docker container inspect old-k8s-version-914579 --format={{.State.Status}}
	I0819 13:46:49.571569  152452 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-914579"
	I0819 13:46:49.573942  152452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-914579"
	I0819 13:46:49.574253  152452 cli_runner.go:164] Run: docker container inspect old-k8s-version-914579 --format={{.State.Status}}
	I0819 13:46:49.583745  152452 out.go:177] * Verifying Kubernetes components...
	I0819 13:46:49.586822  152452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:46:49.630859  152452 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0819 13:46:49.637847  152452 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0819 13:46:49.640535  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0819 13:46:49.640565  152452 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0819 13:46:49.640651  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:49.661987  152452 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:46:49.662315  152452 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-914579"
	W0819 13:46:49.662329  152452 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:46:49.662355  152452 host.go:66] Checking if "old-k8s-version-914579" exists ...
	I0819 13:46:49.662781  152452 cli_runner.go:164] Run: docker container inspect old-k8s-version-914579 --format={{.State.Status}}
	I0819 13:46:49.667873  152452 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:46:49.667903  152452 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:46:49.667984  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:49.687921  152452 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:46:49.690685  152452 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:46:49.690715  152452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:46:49.690805  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:49.704657  152452 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:46:49.704680  152452 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:46:49.704742  152452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-914579
	I0819 13:46:49.759968  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:49.761250  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:49.783569  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:49.784326  152452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38550 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/old-k8s-version-914579/id_rsa Username:docker}
	I0819 13:46:49.864751  152452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:46:49.928269  152452 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-914579" to be "Ready" ...
	I0819 13:46:49.976451  152452 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:46:49.976628  152452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:46:50.030842  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:46:50.049612  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:46:50.060847  152452 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:46:50.060924  152452 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:46:50.081563  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0819 13:46:50.081644  152452 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0819 13:46:50.147968  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0819 13:46:50.148046  152452 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0819 13:46:50.215879  152452 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:46:50.215954  152452 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:46:50.263086  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0819 13:46:50.263165  152452 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0819 13:46:50.361252  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0819 13:46:50.361347  152452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0819 13:46:50.386496  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:46:50.522223  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0819 13:46:50.522245  152452 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0819 13:46:50.542563  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:50.542595  152452 retry.go:31] will retry after 305.889128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:50.542629  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:50.542638  152452 retry.go:31] will retry after 193.605989ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:50.594991  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0819 13:46:50.595019  152452 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0819 13:46:50.690735  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:50.690764  152452 retry.go:31] will retry after 359.345218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:50.698981  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0819 13:46:50.699006  152452 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0819 13:46:50.725448  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0819 13:46:50.725470  152452 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0819 13:46:50.736812  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:46:50.759892  152452 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 13:46:50.759914  152452 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0819 13:46:50.819824  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 13:46:50.848989  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 13:46:51.028571  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.028602  152452 retry.go:31] will retry after 410.691713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.050890  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 13:46:51.173982  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.174015  152452 retry.go:31] will retry after 344.425642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:51.223765  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.223843  152452 retry.go:31] will retry after 193.145682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:51.367934  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.367967  152452 retry.go:31] will retry after 227.843924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.417173  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:46:51.439912  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:46:51.519415  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 13:46:51.596505  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 13:46:51.703592  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.703623  152452 retry.go:31] will retry after 681.284661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:51.703656  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.703664  152452 retry.go:31] will retry after 645.182918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:51.820838  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.820873  152452 retry.go:31] will retry after 193.798349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:51.820921  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.820933  152452 retry.go:31] will retry after 636.752008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:51.929600  152452 node_ready.go:53] error getting node "old-k8s-version-914579": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-914579": dial tcp 192.168.76.2:8443: connect: connection refused
	I0819 13:46:52.014930  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 13:46:52.120444  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:52.120497  152452 retry.go:31] will retry after 640.146917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:52.349917  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:46:52.385404  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:46:52.458426  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 13:46:52.521928  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:52.521989  152452 retry.go:31] will retry after 1.007422691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:52.531994  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:52.532026  152452 retry.go:31] will retry after 1.080401041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:52.611283  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:52.611313  152452 retry.go:31] will retry after 984.017631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:52.761687  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 13:46:52.848922  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:52.848951  152452 retry.go:31] will retry after 807.826263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:53.530586  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:46:53.595903  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:46:53.613172  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 13:46:53.649424  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:53.649459  152452 retry.go:31] will retry after 957.402629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:53.657783  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 13:46:53.823990  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:53.824026  152452 retry.go:31] will retry after 714.256418ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:53.841304  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:53.841345  152452 retry.go:31] will retry after 1.585998141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:53.841386  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:53.841399  152452 retry.go:31] will retry after 1.643077205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:54.429665  152452 node_ready.go:53] error getting node "old-k8s-version-914579": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-914579": dial tcp 192.168.76.2:8443: connect: connection refused
	I0819 13:46:54.538828  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:46:54.607265  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 13:46:54.671435  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:54.671480  152452 retry.go:31] will retry after 2.449668237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:54.728759  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:54.728813  152452 retry.go:31] will retry after 1.776198385s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:55.427991  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:46:55.485556  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 13:46:55.519744  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:55.519859  152452 retry.go:31] will retry after 1.166136875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 13:46:55.614837  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:55.614912  152452 retry.go:31] will retry after 1.810440222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:56.506131  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 13:46:56.615493  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:56.615574  152452 retry.go:31] will retry after 2.393632434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:56.686915  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 13:46:56.792175  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:56.792255  152452 retry.go:31] will retry after 1.591780893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:56.929796  152452 node_ready.go:53] error getting node "old-k8s-version-914579": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-914579": dial tcp 192.168.76.2:8443: connect: connection refused
	I0819 13:46:57.122077  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 13:46:57.221046  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:57.221127  152452 retry.go:31] will retry after 3.519218405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:57.426391  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 13:46:57.548336  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:57.548365  152452 retry.go:31] will retry after 3.067290425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:58.384306  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 13:46:58.681702  152452 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:58.681748  152452 retry.go:31] will retry after 2.346030581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 13:46:59.009844  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:47:00.616809  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 13:47:00.741113  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:47:01.028760  152452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:47:08.675291  152452 node_ready.go:49] node "old-k8s-version-914579" has status "Ready":"True"
	I0819 13:47:08.675315  152452 node_ready.go:38] duration metric: took 18.746964094s for node "old-k8s-version-914579" to be "Ready" ...
	I0819 13:47:08.675325  152452 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:47:08.900971  152452 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-8qwdj" in "kube-system" namespace to be "Ready" ...
	I0819 13:47:09.270141  152452 pod_ready.go:93] pod "coredns-74ff55c5b-8qwdj" in "kube-system" namespace has status "Ready":"True"
	I0819 13:47:09.270167  152452 pod_ready.go:82] duration metric: took 369.119832ms for pod "coredns-74ff55c5b-8qwdj" in "kube-system" namespace to be "Ready" ...
	I0819 13:47:09.270179  152452 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:47:09.345675  152452 pod_ready.go:93] pod "etcd-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"True"
	I0819 13:47:09.345745  152452 pod_ready.go:82] duration metric: took 75.557542ms for pod "etcd-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:47:09.345776  152452 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:47:09.409874  152452 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"True"
	I0819 13:47:09.409940  152452 pod_ready.go:82] duration metric: took 64.142073ms for pod "kube-apiserver-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:47:09.409967  152452 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:47:11.241034  152452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.231145825s)
	I0819 13:47:11.428491  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:11.610086  152452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.993214369s)
	I0819 13:47:11.610356  152452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.581569563s)
	I0819 13:47:11.610549  152452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.869181403s)
	I0819 13:47:11.610582  152452 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-914579"
	I0819 13:47:11.612939  152452 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-914579 addons enable metrics-server
	
	I0819 13:47:11.629306  152452 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0819 13:47:11.632221  152452 addons.go:510] duration metric: took 22.06106473s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0819 13:47:13.917263  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:15.917297  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:17.917947  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:19.918630  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:22.416228  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:24.417067  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:26.418088  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:28.917346  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:31.418645  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:33.949103  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:36.419506  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:38.472532  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:40.917380  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:42.917553  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:45.420432  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:47.916245  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:49.917474  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:52.417057  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:54.917315  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:57.416720  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:47:59.426078  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:01.918821  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:04.416130  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:06.416672  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:08.416877  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:10.919567  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:13.416852  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:15.916725  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:17.916776  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:20.420472  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:22.918816  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:24.916247  152452 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:24.916273  152452 pod_ready.go:82] duration metric: took 1m15.506252507s for pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:24.916287  152452 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h74p7" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:24.921504  152452 pod_ready.go:93] pod "kube-proxy-h74p7" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:24.921529  152452 pod_ready.go:82] duration metric: took 5.235022ms for pod "kube-proxy-h74p7" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:24.921541  152452 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:26.932620  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:29.428314  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:31.928244  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:33.928780  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:34.928259  152452 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:34.928291  152452 pod_ready.go:82] duration metric: took 10.006741707s for pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:34.928304  152452 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.934720  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:38.962296  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:41.433908  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:43.435418  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:45.934634  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:48.434821  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:50.934608  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:52.934845  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:55.433832  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:57.434466  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:59.434579  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:01.934503  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:03.934901  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:06.435134  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:08.934952  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:11.434322  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:13.434874  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:15.934454  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:18.435150  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:20.935216  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:23.434770  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:25.434829  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:27.934477  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:29.934921  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:32.434702  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:34.936027  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:37.434753  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:39.492846  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:41.934580  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:43.935230  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:45.935307  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:48.435447  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:50.935387  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:53.434796  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:55.935376  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:58.434485  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:00.466805  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:02.935170  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:04.935322  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:07.434833  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:09.936204  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:12.434526  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:14.935092  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:16.935611  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:19.433957  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:21.438984  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:23.934582  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:25.935760  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:28.434454  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:30.436079  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:32.934427  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:34.934763  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:36.935055  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:39.435039  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:41.435936  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:43.934346  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:45.935139  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:47.935651  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:50.434768  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:52.435137  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:54.934726  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:57.434128  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:59.935544  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:01.936231  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:04.434264  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:06.441299  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:08.934048  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:10.934468  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:12.934719  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:15.434024  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:17.438817  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:19.935273  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:21.936911  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:24.434460  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:26.935401  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:29.434401  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:31.435212  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:33.935129  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:36.434032  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:38.434748  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:40.935437  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:43.434596  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:45.435727  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:47.934998  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:49.935071  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:51.935315  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:54.434496  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:56.434591  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:58.434895  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:00.492651  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:02.934662  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:05.434898  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:07.934929  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:09.935287  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:12.434870  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:14.934271  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:16.935007  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:18.937904  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:21.434619  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:23.434906  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:25.934891  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:28.435242  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:30.935637  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:33.439917  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:34.935755  152452 pod_ready.go:82] duration metric: took 4m0.007436285s for pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace to be "Ready" ...
	E0819 13:52:34.935829  152452 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:52:34.935840  152452 pod_ready.go:39] duration metric: took 5m26.260504475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:52:34.935859  152452 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:52:34.935897  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:52:34.935964  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:52:34.983562  152452 cri.go:89] found id: "b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:34.983594  152452 cri.go:89] found id: "fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:34.983600  152452 cri.go:89] found id: ""
	I0819 13:52:34.983607  152452 logs.go:276] 2 containers: [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef]
	I0819 13:52:34.983673  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:34.987405  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:34.991212  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 13:52:34.991322  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:52:35.038233  152452 cri.go:89] found id: "a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:35.038256  152452 cri.go:89] found id: "6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:35.038261  152452 cri.go:89] found id: ""
	I0819 13:52:35.038269  152452 logs.go:276] 2 containers: [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d]
	I0819 13:52:35.038330  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.042886  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.047062  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 13:52:35.047139  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:52:35.088211  152452 cri.go:89] found id: "2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:35.088252  152452 cri.go:89] found id: "8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:35.088258  152452 cri.go:89] found id: ""
	I0819 13:52:35.088268  152452 logs.go:276] 2 containers: [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5]
	I0819 13:52:35.088394  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.092837  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.097676  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:52:35.097870  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:52:35.144630  152452 cri.go:89] found id: "db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:35.144652  152452 cri.go:89] found id: "c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:35.144657  152452 cri.go:89] found id: ""
	I0819 13:52:35.144665  152452 logs.go:276] 2 containers: [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7]
	I0819 13:52:35.144726  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.148679  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.152190  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:52:35.152273  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:52:35.196608  152452 cri.go:89] found id: "024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:35.196690  152452 cri.go:89] found id: "d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:35.196704  152452 cri.go:89] found id: ""
	I0819 13:52:35.196713  152452 logs.go:276] 2 containers: [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28]
	I0819 13:52:35.196773  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.201290  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.204963  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:52:35.205039  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:52:35.249246  152452 cri.go:89] found id: "fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:35.249266  152452 cri.go:89] found id: "ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:35.249271  152452 cri.go:89] found id: ""
	I0819 13:52:35.249279  152452 logs.go:276] 2 containers: [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e]
	I0819 13:52:35.249349  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.254569  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.258220  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 13:52:35.258299  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:52:35.300607  152452 cri.go:89] found id: "c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:35.300633  152452 cri.go:89] found id: "765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:35.300638  152452 cri.go:89] found id: ""
	I0819 13:52:35.300646  152452 logs.go:276] 2 containers: [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9]
	I0819 13:52:35.300704  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.304359  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.307967  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:52:35.308109  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:52:35.359383  152452 cri.go:89] found id: "65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:35.359409  152452 cri.go:89] found id: "13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:35.359414  152452 cri.go:89] found id: ""
	I0819 13:52:35.359422  152452 logs.go:276] 2 containers: [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa]
	I0819 13:52:35.359535  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.363613  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.367561  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:52:35.367641  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:52:35.406825  152452 cri.go:89] found id: "15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:35.406849  152452 cri.go:89] found id: ""
	I0819 13:52:35.406857  152452 logs.go:276] 1 containers: [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75]
	I0819 13:52:35.406914  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.410940  152452 logs.go:123] Gathering logs for storage-provisioner [13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa] ...
	I0819 13:52:35.410966  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:35.465154  152452 logs.go:123] Gathering logs for containerd ...
	I0819 13:52:35.465182  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 13:52:35.532935  152452 logs.go:123] Gathering logs for kube-apiserver [fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef] ...
	I0819 13:52:35.532976  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:35.591317  152452 logs.go:123] Gathering logs for coredns [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d] ...
	I0819 13:52:35.591395  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:35.650002  152452 logs.go:123] Gathering logs for kube-scheduler [c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7] ...
	I0819 13:52:35.650029  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:35.700068  152452 logs.go:123] Gathering logs for kube-proxy [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303] ...
	I0819 13:52:35.700098  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:35.746365  152452 logs.go:123] Gathering logs for kindnet [765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9] ...
	I0819 13:52:35.746394  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:35.798143  152452 logs.go:123] Gathering logs for kubernetes-dashboard [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75] ...
	I0819 13:52:35.798174  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:35.839938  152452 logs.go:123] Gathering logs for kubelet ...
	I0819 13:52:35.839967  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 13:52:35.898330  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505154     660 reflector.go:138] object-"kube-system"/"coredns-token-mgkqs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mgkqs" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.898580  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505275     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vrkdd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vrkdd" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.898804  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505337     660 reflector.go:138] object-"kube-system"/"metrics-server-token-gngrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gngrg" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899032  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505397     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899252  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505458     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-gvnrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gvnrc" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899472  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505515     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899698  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505568     660 reflector.go:138] object-"kube-system"/"kindnet-token-db6v8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-db6v8" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899941  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505619     660 reflector.go:138] object-"default"/"default-token-ldqq4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldqq4" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.907521  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.700916     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.907711  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.923497     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.910569  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:23 old-k8s-version-914579 kubelet[660]: E0819 13:47:23.556884     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.912575  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:38 old-k8s-version-914579 kubelet[660]: E0819 13:47:38.547027     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.913031  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:41 old-k8s-version-914579 kubelet[660]: E0819 13:47:41.081116     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.913488  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:42 old-k8s-version-914579 kubelet[660]: E0819 13:47:42.088354     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.913930  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:43 old-k8s-version-914579 kubelet[660]: E0819 13:47:43.093385     660 pod_workers.go:191] Error syncing pod e088dd49-745a-4473-b25c-b8b1bdef35d2 ("storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"
	W0819 13:52:35.914587  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.143364     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.917029  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.539102     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.917347  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:04 old-k8s-version-914579 kubelet[660]: E0819 13:48:04.530531     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.917951  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:06 old-k8s-version-914579 kubelet[660]: E0819 13:48:06.204897     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.918275  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:10 old-k8s-version-914579 kubelet[660]: E0819 13:48:10.144306     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.918461  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:17 old-k8s-version-914579 kubelet[660]: E0819 13:48:17.533996     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.918787  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:22 old-k8s-version-914579 kubelet[660]: E0819 13:48:22.534681     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.918970  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:28 old-k8s-version-914579 kubelet[660]: E0819 13:48:28.530752     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.919435  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:35 old-k8s-version-914579 kubelet[660]: E0819 13:48:35.289738     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.919895  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.144100     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.922351  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.550260     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.922678  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:53 old-k8s-version-914579 kubelet[660]: E0819 13:48:53.530760     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.922862  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:54 old-k8s-version-914579 kubelet[660]: E0819 13:48:54.530590     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.923186  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:05 old-k8s-version-914579 kubelet[660]: E0819 13:49:05.530108     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.923368  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:06 old-k8s-version-914579 kubelet[660]: E0819 13:49:06.531043     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.923551  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:19 old-k8s-version-914579 kubelet[660]: E0819 13:49:19.533642     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.924153  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:21 old-k8s-version-914579 kubelet[660]: E0819 13:49:21.435392     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.924483  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:30 old-k8s-version-914579 kubelet[660]: E0819 13:49:30.144187     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.924667  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:32 old-k8s-version-914579 kubelet[660]: E0819 13:49:32.530403     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.924993  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:41 old-k8s-version-914579 kubelet[660]: E0819 13:49:41.530796     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.925176  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:44 old-k8s-version-914579 kubelet[660]: E0819 13:49:44.530365     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.925501  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:55 old-k8s-version-914579 kubelet[660]: E0819 13:49:55.530750     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.925688  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:57 old-k8s-version-914579 kubelet[660]: E0819 13:49:57.534986     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.926015  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:07 old-k8s-version-914579 kubelet[660]: E0819 13:50:07.530719     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.928455  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:11 old-k8s-version-914579 kubelet[660]: E0819 13:50:11.538819     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.928791  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:22 old-k8s-version-914579 kubelet[660]: E0819 13:50:22.530086     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.928974  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:25 old-k8s-version-914579 kubelet[660]: E0819 13:50:25.531673     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.929326  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:35 old-k8s-version-914579 kubelet[660]: E0819 13:50:35.530288     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.929510  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:38 old-k8s-version-914579 kubelet[660]: E0819 13:50:38.536408     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.929836  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.531928     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.930292  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.676624     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.930620  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:00 old-k8s-version-914579 kubelet[660]: E0819 13:51:00.175960     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.930808  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:05 old-k8s-version-914579 kubelet[660]: E0819 13:51:05.532058     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.931133  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:13 old-k8s-version-914579 kubelet[660]: E0819 13:51:13.534019     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.931316  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:18 old-k8s-version-914579 kubelet[660]: E0819 13:51:18.530498     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.931826  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:25 old-k8s-version-914579 kubelet[660]: E0819 13:51:25.530598     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.932029  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:31 old-k8s-version-914579 kubelet[660]: E0819 13:51:31.530699     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.932414  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:36 old-k8s-version-914579 kubelet[660]: E0819 13:51:36.530142     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.932607  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:42 old-k8s-version-914579 kubelet[660]: E0819 13:51:42.530485     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.932933  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:48 old-k8s-version-914579 kubelet[660]: E0819 13:51:48.530149     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.933116  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:54 old-k8s-version-914579 kubelet[660]: E0819 13:51:54.530585     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.933493  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:02 old-k8s-version-914579 kubelet[660]: E0819 13:52:02.531227     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.933689  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.934037  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.934225  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.934560  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.934746  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:35.934760  152452 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:52:35.934778  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:52:36.093366  152452 logs.go:123] Gathering logs for etcd [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d] ...
	I0819 13:52:36.093398  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:36.140421  152452 logs.go:123] Gathering logs for kube-controller-manager [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe] ...
	I0819 13:52:36.140454  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:36.207906  152452 logs.go:123] Gathering logs for kube-apiserver [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234] ...
	I0819 13:52:36.207941  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:36.281081  152452 logs.go:123] Gathering logs for etcd [6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d] ...
	I0819 13:52:36.281117  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:36.371336  152452 logs.go:123] Gathering logs for container status ...
	I0819 13:52:36.371370  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:52:36.423185  152452 logs.go:123] Gathering logs for kube-controller-manager [ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e] ...
	I0819 13:52:36.423217  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:36.477465  152452 logs.go:123] Gathering logs for kindnet [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4] ...
	I0819 13:52:36.477500  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:36.551026  152452 logs.go:123] Gathering logs for storage-provisioner [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b] ...
	I0819 13:52:36.551090  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:36.605781  152452 logs.go:123] Gathering logs for dmesg ...
	I0819 13:52:36.605810  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:52:36.625534  152452 logs.go:123] Gathering logs for coredns [8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5] ...
	I0819 13:52:36.625564  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:36.665220  152452 logs.go:123] Gathering logs for kube-scheduler [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c] ...
	I0819 13:52:36.665250  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:36.706249  152452 logs.go:123] Gathering logs for kube-proxy [d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28] ...
	I0819 13:52:36.706279  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:36.751297  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:36.751324  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 13:52:36.751378  152452 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0819 13:52:36.751394  152452 out.go:270]   Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:36.751404  152452 out.go:270]   Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	  Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:36.751419  152452 out.go:270]   Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:36.751425  152452 out.go:270]   Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	  Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:36.751434  152452 out.go:270]   Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:36.751439  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:36.751445  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:52:46.752339  152452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:52:46.770079  152452 api_server.go:72] duration metric: took 5m57.199385315s to wait for apiserver process to appear ...
	I0819 13:52:46.770108  152452 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:52:46.770164  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:52:46.770245  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:52:46.856422  152452 cri.go:89] found id: "b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:46.856452  152452 cri.go:89] found id: "fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:46.856457  152452 cri.go:89] found id: ""
	I0819 13:52:46.856466  152452 logs.go:276] 2 containers: [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef]
	I0819 13:52:46.856537  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.860669  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.865075  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 13:52:46.865149  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:52:46.936606  152452 cri.go:89] found id: "a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:46.936634  152452 cri.go:89] found id: "6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:46.936640  152452 cri.go:89] found id: ""
	I0819 13:52:46.936648  152452 logs.go:276] 2 containers: [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d]
	I0819 13:52:46.936720  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.942994  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.948231  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 13:52:46.948310  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:52:47.020145  152452 cri.go:89] found id: "2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:47.020172  152452 cri.go:89] found id: "8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:47.020177  152452 cri.go:89] found id: ""
	I0819 13:52:47.020188  152452 logs.go:276] 2 containers: [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5]
	I0819 13:52:47.020279  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.024967  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.034264  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:52:47.034360  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:52:47.096009  152452 cri.go:89] found id: "db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:47.096031  152452 cri.go:89] found id: "c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:47.096036  152452 cri.go:89] found id: ""
	I0819 13:52:47.096043  152452 logs.go:276] 2 containers: [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7]
	I0819 13:52:47.096111  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.102562  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.109207  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:52:47.109283  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:52:47.184806  152452 cri.go:89] found id: "024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:47.184828  152452 cri.go:89] found id: "d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:47.184833  152452 cri.go:89] found id: ""
	I0819 13:52:47.184842  152452 logs.go:276] 2 containers: [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28]
	I0819 13:52:47.184903  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.189384  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.194364  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:52:47.194448  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:52:47.265188  152452 cri.go:89] found id: "fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:47.265218  152452 cri.go:89] found id: "ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:47.265224  152452 cri.go:89] found id: ""
	I0819 13:52:47.265231  152452 logs.go:276] 2 containers: [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e]
	I0819 13:52:47.265303  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.269862  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.273980  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 13:52:47.274061  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:52:47.338484  152452 cri.go:89] found id: "c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:47.338509  152452 cri.go:89] found id: "765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:47.338515  152452 cri.go:89] found id: ""
	I0819 13:52:47.338522  152452 logs.go:276] 2 containers: [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9]
	I0819 13:52:47.338580  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.343202  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.370668  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:52:47.370745  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:52:47.432108  152452 cri.go:89] found id: "65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:47.432132  152452 cri.go:89] found id: "13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:47.432137  152452 cri.go:89] found id: ""
	I0819 13:52:47.432160  152452 logs.go:276] 2 containers: [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa]
	I0819 13:52:47.432231  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.438213  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.446636  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:52:47.446722  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:52:47.510037  152452 cri.go:89] found id: "15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:47.510060  152452 cri.go:89] found id: ""
	I0819 13:52:47.510069  152452 logs.go:276] 1 containers: [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75]
	I0819 13:52:47.510144  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.515774  152452 logs.go:123] Gathering logs for etcd [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d] ...
	I0819 13:52:47.515814  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:47.636906  152452 logs.go:123] Gathering logs for storage-provisioner [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b] ...
	I0819 13:52:47.636943  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:47.704121  152452 logs.go:123] Gathering logs for kubernetes-dashboard [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75] ...
	I0819 13:52:47.704152  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:47.805660  152452 logs.go:123] Gathering logs for container status ...
	I0819 13:52:47.805689  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:52:47.889024  152452 logs.go:123] Gathering logs for kubelet ...
	I0819 13:52:47.889055  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 13:52:47.950679  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505154     660 reflector.go:138] object-"kube-system"/"coredns-token-mgkqs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mgkqs" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.950954  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505275     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vrkdd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vrkdd" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951176  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505337     660 reflector.go:138] object-"kube-system"/"metrics-server-token-gngrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gngrg" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951381  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505397     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951596  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505458     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-gvnrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gvnrc" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951821  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505515     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.952031  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505568     660 reflector.go:138] object-"kube-system"/"kindnet-token-db6v8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-db6v8" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.952238  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505619     660 reflector.go:138] object-"default"/"default-token-ldqq4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldqq4" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.959838  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.700916     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.960028  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.923497     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.962801  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:23 old-k8s-version-914579 kubelet[660]: E0819 13:47:23.556884     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.964806  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:38 old-k8s-version-914579 kubelet[660]: E0819 13:47:38.547027     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.965259  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:41 old-k8s-version-914579 kubelet[660]: E0819 13:47:41.081116     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.965718  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:42 old-k8s-version-914579 kubelet[660]: E0819 13:47:42.088354     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.966157  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:43 old-k8s-version-914579 kubelet[660]: E0819 13:47:43.093385     660 pod_workers.go:191] Error syncing pod e088dd49-745a-4473-b25c-b8b1bdef35d2 ("storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"
	W0819 13:52:47.966810  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.143364     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.969243  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.539102     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.969559  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:04 old-k8s-version-914579 kubelet[660]: E0819 13:48:04.530531     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.970145  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:06 old-k8s-version-914579 kubelet[660]: E0819 13:48:06.204897     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.970473  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:10 old-k8s-version-914579 kubelet[660]: E0819 13:48:10.144306     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.970654  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:17 old-k8s-version-914579 kubelet[660]: E0819 13:48:17.533996     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.970983  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:22 old-k8s-version-914579 kubelet[660]: E0819 13:48:22.534681     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.971164  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:28 old-k8s-version-914579 kubelet[660]: E0819 13:48:28.530752     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.971618  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:35 old-k8s-version-914579 kubelet[660]: E0819 13:48:35.289738     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.972083  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.144100     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.974502  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.550260     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.974827  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:53 old-k8s-version-914579 kubelet[660]: E0819 13:48:53.530760     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.975011  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:54 old-k8s-version-914579 kubelet[660]: E0819 13:48:54.530590     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.975335  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:05 old-k8s-version-914579 kubelet[660]: E0819 13:49:05.530108     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.975517  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:06 old-k8s-version-914579 kubelet[660]: E0819 13:49:06.531043     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.975699  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:19 old-k8s-version-914579 kubelet[660]: E0819 13:49:19.533642     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.976287  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:21 old-k8s-version-914579 kubelet[660]: E0819 13:49:21.435392     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.976615  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:30 old-k8s-version-914579 kubelet[660]: E0819 13:49:30.144187     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.976800  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:32 old-k8s-version-914579 kubelet[660]: E0819 13:49:32.530403     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.977129  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:41 old-k8s-version-914579 kubelet[660]: E0819 13:49:41.530796     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.977314  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:44 old-k8s-version-914579 kubelet[660]: E0819 13:49:44.530365     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.977643  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:55 old-k8s-version-914579 kubelet[660]: E0819 13:49:55.530750     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.977826  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:57 old-k8s-version-914579 kubelet[660]: E0819 13:49:57.534986     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.978150  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:07 old-k8s-version-914579 kubelet[660]: E0819 13:50:07.530719     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.980581  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:11 old-k8s-version-914579 kubelet[660]: E0819 13:50:11.538819     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.980908  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:22 old-k8s-version-914579 kubelet[660]: E0819 13:50:22.530086     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.981091  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:25 old-k8s-version-914579 kubelet[660]: E0819 13:50:25.531673     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.981423  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:35 old-k8s-version-914579 kubelet[660]: E0819 13:50:35.530288     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.981610  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:38 old-k8s-version-914579 kubelet[660]: E0819 13:50:38.536408     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.981975  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.531928     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.982431  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.676624     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.982758  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:00 old-k8s-version-914579 kubelet[660]: E0819 13:51:00.175960     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.982940  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:05 old-k8s-version-914579 kubelet[660]: E0819 13:51:05.532058     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.983266  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:13 old-k8s-version-914579 kubelet[660]: E0819 13:51:13.534019     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.983448  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:18 old-k8s-version-914579 kubelet[660]: E0819 13:51:18.530498     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.983773  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:25 old-k8s-version-914579 kubelet[660]: E0819 13:51:25.530598     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.983961  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:31 old-k8s-version-914579 kubelet[660]: E0819 13:51:31.530699     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.984287  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:36 old-k8s-version-914579 kubelet[660]: E0819 13:51:36.530142     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.984471  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:42 old-k8s-version-914579 kubelet[660]: E0819 13:51:42.530485     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.984797  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:48 old-k8s-version-914579 kubelet[660]: E0819 13:51:48.530149     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.984991  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:54 old-k8s-version-914579 kubelet[660]: E0819 13:51:54.530585     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.985325  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:02 old-k8s-version-914579 kubelet[660]: E0819 13:52:02.531227     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.985511  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.985850  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.986033  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.986358  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.986541  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.986866  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532025     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.987049  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532731     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:47.987058  152452 logs.go:123] Gathering logs for kube-apiserver [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234] ...
	I0819 13:52:47.987074  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:48.072238  152452 logs.go:123] Gathering logs for kube-apiserver [fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef] ...
	I0819 13:52:48.072277  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:48.144742  152452 logs.go:123] Gathering logs for coredns [8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5] ...
	I0819 13:52:48.144795  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:48.193011  152452 logs.go:123] Gathering logs for dmesg ...
	I0819 13:52:48.193038  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:52:48.210157  152452 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:52:48.210198  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:52:48.374447  152452 logs.go:123] Gathering logs for kube-controller-manager [ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e] ...
	I0819 13:52:48.374478  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:48.431668  152452 logs.go:123] Gathering logs for kindnet [765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9] ...
	I0819 13:52:48.431701  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:48.487205  152452 logs.go:123] Gathering logs for storage-provisioner [13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa] ...
	I0819 13:52:48.487242  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:48.547481  152452 logs.go:123] Gathering logs for containerd ...
	I0819 13:52:48.547514  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 13:52:48.610774  152452 logs.go:123] Gathering logs for etcd [6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d] ...
	I0819 13:52:48.610815  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:48.676770  152452 logs.go:123] Gathering logs for coredns [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d] ...
	I0819 13:52:48.676811  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:48.719917  152452 logs.go:123] Gathering logs for kube-scheduler [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c] ...
	I0819 13:52:48.719995  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:48.764294  152452 logs.go:123] Gathering logs for kube-scheduler [c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7] ...
	I0819 13:52:48.764367  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:48.807362  152452 logs.go:123] Gathering logs for kube-proxy [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303] ...
	I0819 13:52:48.807437  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:48.846905  152452 logs.go:123] Gathering logs for kube-proxy [d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28] ...
	I0819 13:52:48.846935  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:48.887930  152452 logs.go:123] Gathering logs for kube-controller-manager [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe] ...
	I0819 13:52:48.887962  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:48.948162  152452 logs.go:123] Gathering logs for kindnet [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4] ...
	I0819 13:52:48.948198  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:49.017994  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:49.018028  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 13:52:49.018115  152452 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0819 13:52:49.018145  152452 out.go:270]   Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:49.018161  152452 out.go:270]   Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	  Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:49.018169  152452 out.go:270]   Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:49.018195  152452 out.go:270]   Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532025     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	  Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532025     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:49.018204  152452 out.go:270]   Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532731     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532731     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:49.018210  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:49.018223  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:52:59.019588  152452 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0819 13:52:59.031741  152452 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0819 13:52:59.034833  152452 out.go:201] 
	W0819 13:52:59.037405  152452 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0819 13:52:59.037441  152452 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0819 13:52:59.037463  152452 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0819 13:52:59.037469  152452 out.go:270] * 
	* 
	W0819 13:52:59.038452  152452 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:52:59.042159  152452 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-914579 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
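For context on the K8S_UNHEALTHY_CONTROL_PLANE exit above: the log shows the apiserver healthz endpoint at https://192.168.76.2:8443/healthz answering 200 "ok" at 13:52:59, yet the wait loop still gave up because the control plane never reported v1.20.0. Below is a minimal, illustrative Go sketch of that kind of healthz probe; it is not the test harness code, and the endpoint address and the skipped certificate verification are assumptions taken from this report.

// healthz_probe.go: illustrative sketch only (not minikube or harness code).
// It polls the same healthz URL recorded in the log above and prints the
// status and body; note that a passing probe alone does not satisfy the
// version wait that failed here.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-local certificate, so verification
		// is skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Against the cluster state captured above this would print: status=200 body=ok
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}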
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-914579
helpers_test.go:235: (dbg) docker inspect old-k8s-version-914579:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d362767646f5d274b702c77f9b75f9f2a1eac2d616eedcbae4b14767a077f35",
	        "Created": "2024-08-19T13:44:10.431331676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 152836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T13:46:41.573104142Z",
	            "FinishedAt": "2024-08-19T13:46:39.998561489Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/7d362767646f5d274b702c77f9b75f9f2a1eac2d616eedcbae4b14767a077f35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d362767646f5d274b702c77f9b75f9f2a1eac2d616eedcbae4b14767a077f35/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d362767646f5d274b702c77f9b75f9f2a1eac2d616eedcbae4b14767a077f35/hosts",
	        "LogPath": "/var/lib/docker/containers/7d362767646f5d274b702c77f9b75f9f2a1eac2d616eedcbae4b14767a077f35/7d362767646f5d274b702c77f9b75f9f2a1eac2d616eedcbae4b14767a077f35-json.log",
	        "Name": "/old-k8s-version-914579",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-914579:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-914579",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eb924d6f85b97306f54c59f563c7cbd0a4cabec52b2f0e15ecdb35c30a7db0b5-init/diff:/var/lib/docker/overlay2/f9730c920ad297aa3b42f5a0ebbe1c9311721ca848f3268205322d3e26bf32e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb924d6f85b97306f54c59f563c7cbd0a4cabec52b2f0e15ecdb35c30a7db0b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb924d6f85b97306f54c59f563c7cbd0a4cabec52b2f0e15ecdb35c30a7db0b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb924d6f85b97306f54c59f563c7cbd0a4cabec52b2f0e15ecdb35c30a7db0b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-914579",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-914579/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-914579",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-914579",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-914579",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e60bc8038f4507ff4b84637abcdb15c0e165cd87b0023be5c0cbf695d4d54798",
	            "SandboxKey": "/var/run/docker/netns/e60bc8038f45",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38550"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38551"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38554"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38552"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38553"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-914579": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "76c7dc0c8baf84f4be3d9b423a96ee8317b7226b8748fb037d5e3e8cb8f1ad54",
	                    "EndpointID": "ae4cbd2172b61a45705ec2e1808903fb71218c3976906196649c69660f2b19ac",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-914579",
	                        "7d362767646f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
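The inspect output above records the ports the kic container forwards to the host (for example 8443/tcp mapped to 127.0.0.1:38553, which is how minikube reaches the apiserver). As a hedged illustration only, here is a small Go sketch of reading that mapping back out of `docker inspect` JSON; the container name is taken from this report and the helper is not part of the test harness.

// port_lookup.go: illustrative sketch only. Shells out to `docker inspect`
// and prints the host address bound to the container's 8443/tcp port.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-914579").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		fmt.Println("could not parse inspect output:", err)
		return
	}
	// Against the output above this prints 127.0.0.1:38553.
	for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIP, b.HostPort)
	}
}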
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-914579 -n old-k8s-version-914579
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-914579 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-914579 logs -n 25: (2.61701181s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-386048 sudo                                  | cilium-386048            | jenkins | v1.33.1 | 19 Aug 24 13:42 UTC |                     |
	|         | containerd config dump                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-386048 sudo                                  | cilium-386048            | jenkins | v1.33.1 | 19 Aug 24 13:42 UTC |                     |
	|         | systemctl status crio --all                            |                          |         |         |                     |                     |
	|         | --full --no-pager                                      |                          |         |         |                     |                     |
	| ssh     | -p cilium-386048 sudo                                  | cilium-386048            | jenkins | v1.33.1 | 19 Aug 24 13:42 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-386048 sudo find                             | cilium-386048            | jenkins | v1.33.1 | 19 Aug 24 13:42 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                          |         |         |                     |                     |
	| ssh     | -p cilium-386048 sudo crio                             | cilium-386048            | jenkins | v1.33.1 | 19 Aug 24 13:42 UTC |                     |
	|         | config                                                 |                          |         |         |                     |                     |
	| delete  | -p cilium-386048                                       | cilium-386048            | jenkins | v1.33.1 | 19 Aug 24 13:42 UTC | 19 Aug 24 13:42 UTC |
	| start   | -p cert-expiration-072717                              | cert-expiration-072717   | jenkins | v1.33.1 | 19 Aug 24 13:42 UTC | 19 Aug 24 13:43 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-896390                               | force-systemd-env-896390 | jenkins | v1.33.1 | 19 Aug 24 13:43 UTC | 19 Aug 24 13:43 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-896390                            | force-systemd-env-896390 | jenkins | v1.33.1 | 19 Aug 24 13:43 UTC | 19 Aug 24 13:43 UTC |
	| start   | -p cert-options-854184                                 | cert-options-854184      | jenkins | v1.33.1 | 19 Aug 24 13:43 UTC | 19 Aug 24 13:43 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-854184 ssh                                | cert-options-854184      | jenkins | v1.33.1 | 19 Aug 24 13:43 UTC | 19 Aug 24 13:44 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-854184 -- sudo                         | cert-options-854184      | jenkins | v1.33.1 | 19 Aug 24 13:44 UTC | 19 Aug 24 13:44 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-854184                                 | cert-options-854184      | jenkins | v1.33.1 | 19 Aug 24 13:44 UTC | 19 Aug 24 13:44 UTC |
	| start   | -p old-k8s-version-914579                              | old-k8s-version-914579   | jenkins | v1.33.1 | 19 Aug 24 13:44 UTC | 19 Aug 24 13:46 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-914579        | old-k8s-version-914579   | jenkins | v1.33.1 | 19 Aug 24 13:46 UTC | 19 Aug 24 13:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-914579                              | old-k8s-version-914579   | jenkins | v1.33.1 | 19 Aug 24 13:46 UTC | 19 Aug 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p cert-expiration-072717                              | cert-expiration-072717   | jenkins | v1.33.1 | 19 Aug 24 13:46 UTC | 19 Aug 24 13:46 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-072717                              | cert-expiration-072717   | jenkins | v1.33.1 | 19 Aug 24 13:46 UTC | 19 Aug 24 13:46 UTC |
	| addons  | enable dashboard -p old-k8s-version-914579             | old-k8s-version-914579   | jenkins | v1.33.1 | 19 Aug 24 13:46 UTC | 19 Aug 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-895877                                   | no-preload-895877        | jenkins | v1.33.1 | 19 Aug 24 13:46 UTC | 19 Aug 24 13:48 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-914579                              | old-k8s-version-914579   | jenkins | v1.33.1 | 19 Aug 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-895877             | no-preload-895877        | jenkins | v1.33.1 | 19 Aug 24 13:48 UTC | 19 Aug 24 13:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-895877                                   | no-preload-895877        | jenkins | v1.33.1 | 19 Aug 24 13:48 UTC | 19 Aug 24 13:48 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-895877                  | no-preload-895877        | jenkins | v1.33.1 | 19 Aug 24 13:48 UTC | 19 Aug 24 13:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-895877                                   | no-preload-895877        | jenkins | v1.33.1 | 19 Aug 24 13:48 UTC | 19 Aug 24 13:52 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 13:48:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 13:48:23.057348  159859 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:48:23.057584  159859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:48:23.057618  159859 out.go:358] Setting ErrFile to fd 2...
	I0819 13:48:23.057640  159859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:48:23.057945  159859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:48:23.058381  159859 out.go:352] Setting JSON to false
	I0819 13:48:23.059586  159859 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":99047,"bootTime":1723976256,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 13:48:23.059696  159859 start.go:139] virtualization:  
	I0819 13:48:23.062109  159859 out.go:177] * [no-preload-895877] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 13:48:23.063429  159859 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:48:23.063513  159859 notify.go:220] Checking for updates...
	I0819 13:48:23.066319  159859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:48:23.067872  159859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:48:23.069284  159859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 13:48:23.070622  159859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 13:48:23.071864  159859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:48:23.073587  159859 config.go:182] Loaded profile config "no-preload-895877": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:48:23.074130  159859 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:48:23.103892  159859 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 13:48:23.104002  159859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:48:23.167753  159859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 13:48:23.15802675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:48:23.167988  159859 docker.go:307] overlay module found
	I0819 13:48:23.169996  159859 out.go:177] * Using the docker driver based on existing profile
	I0819 13:48:23.171499  159859 start.go:297] selected driver: docker
	I0819 13:48:23.171522  159859 start.go:901] validating driver "docker" against &{Name:no-preload-895877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-895877 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:48:23.171661  159859 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:48:23.172333  159859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:48:23.225990  159859 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 13:48:23.21598955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:48:23.226351  159859 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:48:23.226383  159859 cni.go:84] Creating CNI manager for ""
	I0819 13:48:23.226391  159859 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:48:23.226430  159859 start.go:340] cluster config:
	{Name:no-preload-895877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-895877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:48:23.227744  159859 out.go:177] * Starting "no-preload-895877" primary control-plane node in "no-preload-895877" cluster
	I0819 13:48:23.228905  159859 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 13:48:23.230371  159859 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 13:48:23.231747  159859 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 13:48:23.231858  159859 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 13:48:23.232231  159859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/config.json ...
	I0819 13:48:23.232562  159859 cache.go:107] acquiring lock: {Name:mk0435f120e3615424228517dffddda9f2d1a462 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.232650  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 13:48:23.232664  159859 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.601µs
	I0819 13:48:23.232678  159859 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 13:48:23.232692  159859 cache.go:107] acquiring lock: {Name:mkb0412fa022c0128c30eeeb14fb1092141293cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.232727  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 13:48:23.232738  159859 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 47.433µs
	I0819 13:48:23.232744  159859 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 13:48:23.232759  159859 cache.go:107] acquiring lock: {Name:mk4794cdc2eba693f0c8a75ff1419dbc17713015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.232792  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 13:48:23.232802  159859 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 44.184µs
	I0819 13:48:23.232809  159859 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 13:48:23.232822  159859 cache.go:107] acquiring lock: {Name:mk56937c24f200dd7dd76a6b43d70e549c5b8446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.232918  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 13:48:23.232931  159859 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 109.71µs
	I0819 13:48:23.232937  159859 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 13:48:23.232958  159859 cache.go:107] acquiring lock: {Name:mk4a09ee05b7a465c20582b2aa1b983c03eb25b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.232997  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 13:48:23.233008  159859 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 51.806µs
	I0819 13:48:23.233015  159859 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 13:48:23.233033  159859 cache.go:107] acquiring lock: {Name:mk70a897e2f7919b44821e23fc981b384d592661 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.233110  159859 cache.go:107] acquiring lock: {Name:mk8dcb3139b47d733691caf4bc0167a8e9e61ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.233139  159859 cache.go:107] acquiring lock: {Name:mkcc80ee107b53bd7a80a54cf140eff50ca3f763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.233229  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 13:48:23.233253  159859 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 114.304µs
	I0819 13:48:23.233275  159859 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 13:48:23.233285  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 13:48:23.233323  159859 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 214.349µs
	I0819 13:48:23.233343  159859 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 13:48:23.233261  159859 cache.go:115] /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 13:48:23.233387  159859 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 354.655µs
	I0819 13:48:23.233463  159859 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 13:48:23.233522  159859 cache.go:87] Successfully saved all images to host disk.
	W0819 13:48:23.260105  159859 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 13:48:23.260129  159859 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 13:48:23.260212  159859 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 13:48:23.260237  159859 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 13:48:23.260245  159859 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 13:48:23.260253  159859 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 13:48:23.260259  159859 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 13:48:23.383144  159859 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 13:48:23.383200  159859 cache.go:194] Successfully downloaded all kic artifacts
	I0819 13:48:23.383238  159859 start.go:360] acquireMachinesLock for no-preload-895877: {Name:mk35f055cc39e8adadc670a96ac80dee273bd1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 13:48:23.383316  159859 start.go:364] duration metric: took 54.268µs to acquireMachinesLock for "no-preload-895877"
	I0819 13:48:23.383343  159859 start.go:96] Skipping create...Using existing machine configuration
	I0819 13:48:23.383354  159859 fix.go:54] fixHost starting: 
	I0819 13:48:23.383646  159859 cli_runner.go:164] Run: docker container inspect no-preload-895877 --format={{.State.Status}}
	I0819 13:48:23.400858  159859 fix.go:112] recreateIfNeeded on no-preload-895877: state=Stopped err=<nil>
	W0819 13:48:23.400898  159859 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 13:48:23.403896  159859 out.go:177] * Restarting existing docker container for "no-preload-895877" ...
	I0819 13:48:22.918816  152452 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:24.916247  152452 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:24.916273  152452 pod_ready.go:82] duration metric: took 1m15.506252507s for pod "kube-controller-manager-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:24.916287  152452 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h74p7" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:24.921504  152452 pod_ready.go:93] pod "kube-proxy-h74p7" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:24.921529  152452 pod_ready.go:82] duration metric: took 5.235022ms for pod "kube-proxy-h74p7" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:24.921541  152452 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:23.406294  159859 cli_runner.go:164] Run: docker start no-preload-895877
	I0819 13:48:23.747950  159859 cli_runner.go:164] Run: docker container inspect no-preload-895877 --format={{.State.Status}}
	I0819 13:48:23.783056  159859 kic.go:430] container "no-preload-895877" state is running.
	I0819 13:48:23.785117  159859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-895877
	I0819 13:48:23.816532  159859 profile.go:143] Saving config to /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/config.json ...
	I0819 13:48:23.816880  159859 machine.go:93] provisionDockerMachine start ...
	I0819 13:48:23.816989  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:23.847559  159859 main.go:141] libmachine: Using SSH client type: native
	I0819 13:48:23.847865  159859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38560 <nil> <nil>}
	I0819 13:48:23.847876  159859 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 13:48:23.849133  159859 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51670->127.0.0.1:38560: read: connection reset by peer
	I0819 13:48:26.987702  159859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-895877
	
	I0819 13:48:26.987728  159859 ubuntu.go:169] provisioning hostname "no-preload-895877"
	I0819 13:48:26.987884  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:27.011662  159859 main.go:141] libmachine: Using SSH client type: native
	I0819 13:48:27.011981  159859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38560 <nil> <nil>}
	I0819 13:48:27.012001  159859 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-895877 && echo "no-preload-895877" | sudo tee /etc/hostname
	I0819 13:48:27.161893  159859 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-895877
	
	I0819 13:48:27.161993  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:27.184819  159859 main.go:141] libmachine: Using SSH client type: native
	I0819 13:48:27.185077  159859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38560 <nil> <nil>}
	I0819 13:48:27.185102  159859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-895877' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-895877/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-895877' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 13:48:27.320264  159859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 13:48:27.320292  159859 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19479-4141166/.minikube CaCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19479-4141166/.minikube}
	I0819 13:48:27.320326  159859 ubuntu.go:177] setting up certificates
	I0819 13:48:27.320336  159859 provision.go:84] configureAuth start
	I0819 13:48:27.320406  159859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-895877
	I0819 13:48:27.338774  159859 provision.go:143] copyHostCerts
	I0819 13:48:27.338848  159859 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem, removing ...
	I0819 13:48:27.338864  159859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem
	I0819 13:48:27.338944  159859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.pem (1082 bytes)
	I0819 13:48:27.339112  159859 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem, removing ...
	I0819 13:48:27.339126  159859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem
	I0819 13:48:27.339160  159859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/cert.pem (1123 bytes)
	I0819 13:48:27.339227  159859 exec_runner.go:144] found /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem, removing ...
	I0819 13:48:27.339236  159859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem
	I0819 13:48:27.339261  159859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19479-4141166/.minikube/key.pem (1675 bytes)
	I0819 13:48:27.339316  159859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem org=jenkins.no-preload-895877 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-895877]
	I0819 13:48:28.251915  159859 provision.go:177] copyRemoteCerts
	I0819 13:48:28.251993  159859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 13:48:28.252052  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:28.277952  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:28.377778  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 13:48:28.405031  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 13:48:28.440875  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 13:48:28.466770  159859 provision.go:87] duration metric: took 1.146416294s to configureAuth
	I0819 13:48:28.466797  159859 ubuntu.go:193] setting minikube options for container-runtime
	I0819 13:48:28.466997  159859 config.go:182] Loaded profile config "no-preload-895877": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:48:28.467010  159859 machine.go:96] duration metric: took 4.650119053s to provisionDockerMachine
	I0819 13:48:28.467019  159859 start.go:293] postStartSetup for "no-preload-895877" (driver="docker")
	I0819 13:48:28.467034  159859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 13:48:28.467093  159859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 13:48:28.467136  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:28.486081  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:28.581396  159859 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 13:48:28.585114  159859 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 13:48:28.585151  159859 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 13:48:28.585162  159859 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 13:48:28.585170  159859 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 13:48:28.585180  159859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/addons for local assets ...
	I0819 13:48:28.585241  159859 filesync.go:126] Scanning /home/jenkins/minikube-integration/19479-4141166/.minikube/files for local assets ...
	I0819 13:48:28.585325  159859 filesync.go:149] local asset: /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem -> 41465472.pem in /etc/ssl/certs
	I0819 13:48:28.585443  159859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 13:48:28.594870  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem --> /etc/ssl/certs/41465472.pem (1708 bytes)
	I0819 13:48:28.620940  159859 start.go:296] duration metric: took 153.899866ms for postStartSetup
	I0819 13:48:28.621080  159859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:48:28.621149  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:28.639991  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:28.729482  159859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 13:48:28.734389  159859 fix.go:56] duration metric: took 5.351026132s for fixHost
	I0819 13:48:28.734420  159859 start.go:83] releasing machines lock for "no-preload-895877", held for 5.351090804s
	I0819 13:48:28.734496  159859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-895877
	I0819 13:48:28.751955  159859 ssh_runner.go:195] Run: cat /version.json
	I0819 13:48:28.751983  159859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 13:48:28.752013  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:28.752053  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:28.770446  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:28.773397  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:29.013451  159859 ssh_runner.go:195] Run: systemctl --version
	I0819 13:48:29.018290  159859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 13:48:29.023207  159859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 13:48:29.046651  159859 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 13:48:29.046747  159859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 13:48:29.056736  159859 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 13:48:29.056803  159859 start.go:495] detecting cgroup driver to use...
	I0819 13:48:29.056844  159859 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 13:48:29.056902  159859 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 13:48:29.071640  159859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 13:48:29.084803  159859 docker.go:217] disabling cri-docker service (if available) ...
	I0819 13:48:29.084878  159859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 13:48:29.098236  159859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 13:48:29.110467  159859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 13:48:29.194791  159859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 13:48:29.290308  159859 docker.go:233] disabling docker service ...
	I0819 13:48:29.290377  159859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 13:48:29.306239  159859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 13:48:29.321712  159859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 13:48:29.421775  159859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 13:48:29.518912  159859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 13:48:29.537685  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 13:48:29.555752  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 13:48:29.567178  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 13:48:29.577940  159859 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 13:48:29.578060  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 13:48:29.588606  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 13:48:29.599578  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 13:48:29.610152  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 13:48:29.620814  159859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 13:48:29.630821  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 13:48:29.641816  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 13:48:29.651992  159859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 13:48:29.662490  159859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 13:48:29.671917  159859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 13:48:29.680568  159859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:48:29.770563  159859 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 13:48:29.945992  159859 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 13:48:29.946064  159859 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 13:48:29.949921  159859 start.go:563] Will wait 60s for crictl version
	I0819 13:48:29.949986  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:48:29.953488  159859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 13:48:29.995094  159859 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 13:48:29.995163  159859 ssh_runner.go:195] Run: containerd --version
	I0819 13:48:30.057384  159859 ssh_runner.go:195] Run: containerd --version
	I0819 13:48:30.097682  159859 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 13:48:26.932620  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:29.428314  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:30.100749  159859 cli_runner.go:164] Run: docker network inspect no-preload-895877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 13:48:30.119962  159859 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0819 13:48:30.124878  159859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:48:30.139723  159859 kubeadm.go:883] updating cluster {Name:no-preload-895877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-895877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 13:48:30.139936  159859 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 13:48:30.139995  159859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 13:48:30.184142  159859 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 13:48:30.184170  159859 cache_images.go:84] Images are preloaded, skipping loading
	I0819 13:48:30.184179  159859 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.0 containerd true true} ...
	I0819 13:48:30.184345  159859 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-895877 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-895877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 13:48:30.184424  159859 ssh_runner.go:195] Run: sudo crictl info
	I0819 13:48:30.233266  159859 cni.go:84] Creating CNI manager for ""
	I0819 13:48:30.233293  159859 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 13:48:30.233306  159859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 13:48:30.233363  159859 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-895877 NodeName:no-preload-895877 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 13:48:30.233518  159859 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-895877"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 13:48:30.233592  159859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 13:48:30.244384  159859 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 13:48:30.244461  159859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 13:48:30.253829  159859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0819 13:48:30.274896  159859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 13:48:30.294265  159859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0819 13:48:30.315410  159859 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0819 13:48:30.319072  159859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 13:48:30.331376  159859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:48:30.431565  159859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:48:30.447573  159859 certs.go:68] Setting up /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877 for IP: 192.168.85.2
	I0819 13:48:30.447642  159859 certs.go:194] generating shared ca certs ...
	I0819 13:48:30.447675  159859 certs.go:226] acquiring lock for ca certs: {Name:mkb3362db9c120e28de14409a94f066387768cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:48:30.447886  159859 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key
	I0819 13:48:30.447982  159859 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key
	I0819 13:48:30.448012  159859 certs.go:256] generating profile certs ...
	I0819 13:48:30.448143  159859 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.key
	I0819 13:48:30.448247  159859 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/apiserver.key.6d2bf100
	I0819 13:48:30.448328  159859 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/proxy-client.key
	I0819 13:48:30.448491  159859 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/4146547.pem (1338 bytes)
	W0819 13:48:30.448556  159859 certs.go:480] ignoring /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/4146547_empty.pem, impossibly tiny 0 bytes
	I0819 13:48:30.448582  159859 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 13:48:30.448640  159859 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/ca.pem (1082 bytes)
	I0819 13:48:30.448708  159859 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/cert.pem (1123 bytes)
	I0819 13:48:30.448773  159859 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/key.pem (1675 bytes)
	I0819 13:48:30.448864  159859 certs.go:484] found cert: /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem (1708 bytes)
	I0819 13:48:30.449782  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 13:48:30.482666  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 13:48:30.513124  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 13:48:30.561401  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 13:48:30.594308  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 13:48:30.626186  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 13:48:30.655405  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 13:48:30.681061  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 13:48:30.715833  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/certs/4146547.pem --> /usr/share/ca-certificates/4146547.pem (1338 bytes)
	I0819 13:48:30.755853  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/ssl/certs/41465472.pem --> /usr/share/ca-certificates/41465472.pem (1708 bytes)
	I0819 13:48:30.785092  159859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 13:48:30.814608  159859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 13:48:30.833983  159859 ssh_runner.go:195] Run: openssl version
	I0819 13:48:30.842737  159859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4146547.pem && ln -fs /usr/share/ca-certificates/4146547.pem /etc/ssl/certs/4146547.pem"
	I0819 13:48:30.853628  159859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4146547.pem
	I0819 13:48:30.857335  159859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 13:06 /usr/share/ca-certificates/4146547.pem
	I0819 13:48:30.857422  159859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4146547.pem
	I0819 13:48:30.865319  159859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4146547.pem /etc/ssl/certs/51391683.0"
	I0819 13:48:30.874681  159859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41465472.pem && ln -fs /usr/share/ca-certificates/41465472.pem /etc/ssl/certs/41465472.pem"
	I0819 13:48:30.884432  159859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41465472.pem
	I0819 13:48:30.888733  159859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 13:06 /usr/share/ca-certificates/41465472.pem
	I0819 13:48:30.888804  159859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41465472.pem
	I0819 13:48:30.897649  159859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41465472.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 13:48:30.906721  159859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 13:48:30.916450  159859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:48:30.920177  159859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:48:30.920283  159859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 13:48:30.928467  159859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 13:48:30.938078  159859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 13:48:30.941924  159859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 13:48:30.949058  159859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 13:48:30.956250  159859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 13:48:30.963360  159859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 13:48:30.970754  159859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 13:48:30.978177  159859 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 13:48:30.985988  159859 kubeadm.go:392] StartCluster: {Name:no-preload-895877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-895877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:48:30.986089  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 13:48:30.986171  159859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 13:48:31.043608  159859 cri.go:89] found id: "6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd"
	I0819 13:48:31.043630  159859 cri.go:89] found id: "72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2"
	I0819 13:48:31.043636  159859 cri.go:89] found id: "6b9abbc8ffb0d57218a397dad761cf2aff19c9771afaf4600ebc8c066ec59d5a"
	I0819 13:48:31.043650  159859 cri.go:89] found id: "d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14"
	I0819 13:48:31.043654  159859 cri.go:89] found id: "ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b"
	I0819 13:48:31.043658  159859 cri.go:89] found id: "c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744"
	I0819 13:48:31.043661  159859 cri.go:89] found id: "7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137"
	I0819 13:48:31.043685  159859 cri.go:89] found id: "7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad"
	I0819 13:48:31.043694  159859 cri.go:89] found id: ""
	I0819 13:48:31.043767  159859 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0819 13:48:31.060844  159859 cri.go:116] JSON = null
	W0819 13:48:31.060919  159859 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0819 13:48:31.061041  159859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 13:48:31.074778  159859 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 13:48:31.074798  159859 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 13:48:31.074872  159859 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 13:48:31.086119  159859 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 13:48:31.086811  159859 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-895877" does not appear in /home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:48:31.087135  159859 kubeconfig.go:62] /home/jenkins/minikube-integration/19479-4141166/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-895877" cluster setting kubeconfig missing "no-preload-895877" context setting]
	I0819 13:48:31.087678  159859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/kubeconfig: {Name:mk7b0eea2060f71726f692d0256a33fdf7565e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:48:31.089213  159859 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 13:48:31.104537  159859 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0819 13:48:31.104615  159859 kubeadm.go:597] duration metric: took 29.809734ms to restartPrimaryControlPlane
	I0819 13:48:31.104650  159859 kubeadm.go:394] duration metric: took 118.665377ms to StartCluster
	I0819 13:48:31.104681  159859 settings.go:142] acquiring lock: {Name:mkaa4019b166703efd95aaa3737397f414197f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:48:31.104771  159859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:48:31.105752  159859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19479-4141166/kubeconfig: {Name:mk7b0eea2060f71726f692d0256a33fdf7565e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 13:48:31.106035  159859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 13:48:31.106387  159859 config.go:182] Loaded profile config "no-preload-895877": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:48:31.106666  159859 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 13:48:31.106781  159859 addons.go:69] Setting storage-provisioner=true in profile "no-preload-895877"
	I0819 13:48:31.106806  159859 addons.go:234] Setting addon storage-provisioner=true in "no-preload-895877"
	W0819 13:48:31.106813  159859 addons.go:243] addon storage-provisioner should already be in state true
	I0819 13:48:31.106837  159859 host.go:66] Checking if "no-preload-895877" exists ...
	I0819 13:48:31.107287  159859 cli_runner.go:164] Run: docker container inspect no-preload-895877 --format={{.State.Status}}
	I0819 13:48:31.107473  159859 addons.go:69] Setting default-storageclass=true in profile "no-preload-895877"
	I0819 13:48:31.107514  159859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-895877"
	I0819 13:48:31.107704  159859 addons.go:69] Setting metrics-server=true in profile "no-preload-895877"
	I0819 13:48:31.107738  159859 addons.go:234] Setting addon metrics-server=true in "no-preload-895877"
	W0819 13:48:31.107752  159859 addons.go:243] addon metrics-server should already be in state true
	I0819 13:48:31.107800  159859 host.go:66] Checking if "no-preload-895877" exists ...
	I0819 13:48:31.107906  159859 cli_runner.go:164] Run: docker container inspect no-preload-895877 --format={{.State.Status}}
	I0819 13:48:31.108207  159859 cli_runner.go:164] Run: docker container inspect no-preload-895877 --format={{.State.Status}}
	I0819 13:48:31.110728  159859 out.go:177] * Verifying Kubernetes components...
	I0819 13:48:31.110843  159859 addons.go:69] Setting dashboard=true in profile "no-preload-895877"
	I0819 13:48:31.110907  159859 addons.go:234] Setting addon dashboard=true in "no-preload-895877"
	W0819 13:48:31.110935  159859 addons.go:243] addon dashboard should already be in state true
	I0819 13:48:31.110984  159859 host.go:66] Checking if "no-preload-895877" exists ...
	I0819 13:48:31.111470  159859 cli_runner.go:164] Run: docker container inspect no-preload-895877 --format={{.State.Status}}
	I0819 13:48:31.116236  159859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 13:48:31.160705  159859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 13:48:31.164046  159859 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:48:31.164074  159859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 13:48:31.164144  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:31.167098  159859 addons.go:234] Setting addon default-storageclass=true in "no-preload-895877"
	W0819 13:48:31.167120  159859 addons.go:243] addon default-storageclass should already be in state true
	I0819 13:48:31.167147  159859 host.go:66] Checking if "no-preload-895877" exists ...
	I0819 13:48:31.167578  159859 cli_runner.go:164] Run: docker container inspect no-preload-895877 --format={{.State.Status}}
	I0819 13:48:31.217117  159859 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 13:48:31.221158  159859 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 13:48:31.221186  159859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 13:48:31.221261  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:31.234983  159859 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0819 13:48:31.237698  159859 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0819 13:48:31.240510  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0819 13:48:31.240536  159859 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0819 13:48:31.240613  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:31.249998  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:31.255585  159859 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 13:48:31.255605  159859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 13:48:31.255667  159859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-895877
	I0819 13:48:31.293351  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:31.305403  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:31.311379  159859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38560 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/no-preload-895877/id_rsa Username:docker}
	I0819 13:48:31.365493  159859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 13:48:31.433554  159859 node_ready.go:35] waiting up to 6m0s for node "no-preload-895877" to be "Ready" ...
	I0819 13:48:31.511511  159859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:48:31.540279  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0819 13:48:31.540353  159859 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0819 13:48:31.612644  159859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:48:31.630754  159859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 13:48:31.630824  159859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 13:48:31.651896  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0819 13:48:31.651977  159859 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0819 13:48:31.731552  159859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 13:48:31.731629  159859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 13:48:31.765553  159859 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:48:31.765633  159859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 13:48:31.772876  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0819 13:48:31.772954  159859 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0819 13:48:31.819465  159859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 13:48:31.851050  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0819 13:48:31.851070  159859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0819 13:48:31.997474  159859 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0819 13:48:31.997562  159859 retry.go:31] will retry after 288.63713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0819 13:48:32.106678  159859 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0819 13:48:32.106759  159859 retry.go:31] will retry after 133.564512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0819 13:48:32.176528  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0819 13:48:32.176603  159859 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0819 13:48:32.241034  159859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 13:48:32.286791  159859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 13:48:32.322509  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0819 13:48:32.322535  159859 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0819 13:48:32.382486  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0819 13:48:32.382513  159859 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0819 13:48:32.546197  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0819 13:48:32.546225  159859 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0819 13:48:32.671774  159859 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 13:48:32.671846  159859 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0819 13:48:32.782065  159859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 13:48:31.928244  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:33.928780  152452 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:34.928259  152452 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:34.928291  152452 pod_ready.go:82] duration metric: took 10.006741707s for pod "kube-scheduler-old-k8s-version-914579" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:34.928304  152452 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.639325  159859 node_ready.go:49] node "no-preload-895877" has status "Ready":"True"
	I0819 13:48:36.639355  159859 node_ready.go:38] duration metric: took 5.205723105s for node "no-preload-895877" to be "Ready" ...
	I0819 13:48:36.639366  159859 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:48:36.759851  159859 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fgrp5" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.821739  159859 pod_ready.go:93] pod "coredns-6f6b679f8f-fgrp5" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:36.821769  159859 pod_ready.go:82] duration metric: took 61.88001ms for pod "coredns-6f6b679f8f-fgrp5" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.821782  159859 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.838273  159859 pod_ready.go:93] pod "etcd-no-preload-895877" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:36.838309  159859 pod_ready.go:82] duration metric: took 16.518883ms for pod "etcd-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.838324  159859 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.897225  159859 pod_ready.go:93] pod "kube-apiserver-no-preload-895877" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:36.897251  159859 pod_ready.go:82] duration metric: took 58.91993ms for pod "kube-apiserver-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.897265  159859 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.931093  159859 pod_ready.go:93] pod "kube-controller-manager-no-preload-895877" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:36.931119  159859 pod_ready.go:82] duration metric: took 33.846974ms for pod "kube-controller-manager-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.931133  159859 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9q48v" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.990754  159859 pod_ready.go:93] pod "kube-proxy-9q48v" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:36.990789  159859 pod_ready.go:82] duration metric: took 59.648966ms for pod "kube-proxy-9q48v" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:36.990801  159859 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:37.245777  159859 pod_ready.go:93] pod "kube-scheduler-no-preload-895877" in "kube-system" namespace has status "Ready":"True"
	I0819 13:48:37.245805  159859 pod_ready.go:82] duration metric: took 254.996306ms for pod "kube-scheduler-no-preload-895877" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:37.245818  159859 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace to be "Ready" ...
	I0819 13:48:39.253176  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:40.292845  159859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.473338045s)
	I0819 13:48:40.292879  159859 addons.go:475] Verifying addon metrics-server=true in "no-preload-895877"
	I0819 13:48:40.292915  159859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.051807446s)
	I0819 13:48:40.414318  159859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.127443297s)
	I0819 13:48:40.538107  159859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.755984289s)
	I0819 13:48:40.540658  159859 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-895877 addons enable metrics-server
	
	I0819 13:48:40.543592  159859 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I0819 13:48:36.934720  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:38.962296  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:40.546196  159859 addons.go:510] duration metric: took 9.439531097s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
	I0819 13:48:41.253544  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:41.433908  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:43.435418  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:43.751681  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:45.751936  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:45.934634  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:48.434821  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:48.256450  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:50.751871  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:52.752231  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:50.934608  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:52.934845  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:55.433832  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:55.252312  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:57.751987  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:57.434466  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:59.434579  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:48:59.752098  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:01.754240  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:01.934503  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:03.934901  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:04.253030  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:06.253609  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:06.435134  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:08.934952  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:08.752272  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:10.752789  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:11.434322  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:13.434874  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:13.252783  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:15.756579  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:15.934454  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:18.435150  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:18.254223  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:20.752470  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:20.935216  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:23.434770  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:25.434829  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:23.252128  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:25.751944  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:27.752479  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:27.934477  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:29.934921  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:29.752613  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:32.251975  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:32.434702  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:34.936027  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:34.252591  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:36.252850  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:37.434753  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:39.492846  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:38.751694  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:40.752753  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:41.934580  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:43.935230  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:43.252825  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:45.264347  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:47.752029  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:45.935307  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:48.435447  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:50.251323  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:52.252894  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:50.935387  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:53.434796  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:54.253140  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:56.752233  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:55.935376  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:58.434485  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:00.466805  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:49:59.252022  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:01.256104  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:02.935170  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:04.935322  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:03.752302  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:06.253451  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:07.434833  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:09.936204  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:08.253573  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:10.752710  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:12.434526  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:14.935092  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:13.252889  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:15.753254  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:16.935611  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:19.433957  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:18.252325  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:20.753297  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:21.438984  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:23.934582  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:23.252841  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:25.752377  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:25.935760  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:28.434454  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:30.436079  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:28.252194  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:30.252743  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:32.752444  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:32.934427  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:34.934763  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:34.752631  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:37.252322  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:36.935055  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:39.435039  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:39.752027  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:41.759765  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:41.435936  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:43.934346  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:44.253068  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:46.752621  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:45.935139  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:47.935651  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:50.434768  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:49.251910  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:51.252221  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:52.435137  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:54.934726  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:53.252499  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:55.252709  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:57.751681  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:57.434128  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:59.935544  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:50:59.752200  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:01.752510  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:01.936231  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:04.434264  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:04.252618  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:06.751606  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:06.441299  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:08.934048  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:08.753239  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:11.253501  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:10.934468  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:12.934719  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:15.434024  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:13.752777  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:16.252992  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:17.438817  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:19.935273  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:18.254096  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:20.752336  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:21.936911  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:24.434460  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:23.252020  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:25.752107  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:27.752290  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:26.935401  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:29.434401  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:30.252267  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:32.756406  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:31.435212  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:33.935129  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:35.253151  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:37.752395  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:36.434032  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:38.434748  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:39.753032  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:42.255583  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:40.935437  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:43.434596  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:45.435727  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:44.752347  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:47.252795  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:47.934998  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:49.935071  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:49.751936  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:51.752716  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:51.935315  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:54.434496  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:54.251697  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:56.252610  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:56.434591  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:58.434895  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:00.492651  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:51:58.252826  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:00.361946  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:02.752354  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:02.934662  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:05.434898  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:05.252886  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:07.752220  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:07.934929  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:09.935287  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:10.252746  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:12.253297  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:12.434870  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:14.934271  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:14.752180  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:17.251910  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:16.935007  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:18.937904  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:19.752199  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:21.752631  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:21.434619  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:23.434906  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:24.252905  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:26.254234  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:25.934891  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:28.435242  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:28.752021  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:30.752630  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:32.754409  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:30.935637  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:33.439917  152452 pod_ready.go:103] pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:34.935755  152452 pod_ready.go:82] duration metric: took 4m0.007436285s for pod "metrics-server-9975d5f86-ncd6r" in "kube-system" namespace to be "Ready" ...
	E0819 13:52:34.935829  152452 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:52:34.935840  152452 pod_ready.go:39] duration metric: took 5m26.260504475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:52:34.935859  152452 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:52:34.935897  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:52:34.935964  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:52:34.983562  152452 cri.go:89] found id: "b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:34.983594  152452 cri.go:89] found id: "fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:34.983600  152452 cri.go:89] found id: ""
	I0819 13:52:34.983607  152452 logs.go:276] 2 containers: [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef]
	I0819 13:52:34.983673  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:34.987405  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:34.991212  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 13:52:34.991322  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:52:35.038233  152452 cri.go:89] found id: "a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:35.038256  152452 cri.go:89] found id: "6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:35.038261  152452 cri.go:89] found id: ""
	I0819 13:52:35.038269  152452 logs.go:276] 2 containers: [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d]
	I0819 13:52:35.038330  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.042886  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.047062  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 13:52:35.047139  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:52:35.088211  152452 cri.go:89] found id: "2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:35.088252  152452 cri.go:89] found id: "8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:35.088258  152452 cri.go:89] found id: ""
	I0819 13:52:35.088268  152452 logs.go:276] 2 containers: [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5]
	I0819 13:52:35.088394  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.092837  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.097676  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:52:35.097870  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:52:35.144630  152452 cri.go:89] found id: "db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:35.144652  152452 cri.go:89] found id: "c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:35.144657  152452 cri.go:89] found id: ""
	I0819 13:52:35.144665  152452 logs.go:276] 2 containers: [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7]
	I0819 13:52:35.144726  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.148679  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.152190  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:52:35.152273  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:52:35.196608  152452 cri.go:89] found id: "024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:35.196690  152452 cri.go:89] found id: "d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:35.196704  152452 cri.go:89] found id: ""
	I0819 13:52:35.196713  152452 logs.go:276] 2 containers: [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28]
	I0819 13:52:35.196773  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.201290  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.204963  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:52:35.205039  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:52:35.249246  152452 cri.go:89] found id: "fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:35.249266  152452 cri.go:89] found id: "ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:35.249271  152452 cri.go:89] found id: ""
	I0819 13:52:35.249279  152452 logs.go:276] 2 containers: [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e]
	I0819 13:52:35.249349  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.254569  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.258220  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 13:52:35.258299  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:52:35.300607  152452 cri.go:89] found id: "c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:35.300633  152452 cri.go:89] found id: "765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:35.300638  152452 cri.go:89] found id: ""
	I0819 13:52:35.300646  152452 logs.go:276] 2 containers: [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9]
	I0819 13:52:35.300704  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.304359  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.307967  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:52:35.308109  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:52:35.359383  152452 cri.go:89] found id: "65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:35.359409  152452 cri.go:89] found id: "13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:35.359414  152452 cri.go:89] found id: ""
	I0819 13:52:35.359422  152452 logs.go:276] 2 containers: [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa]
	I0819 13:52:35.359535  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.363613  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.367561  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:52:35.367641  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:52:35.406825  152452 cri.go:89] found id: "15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:35.406849  152452 cri.go:89] found id: ""
	I0819 13:52:35.406857  152452 logs.go:276] 1 containers: [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75]
	I0819 13:52:35.406914  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:35.410940  152452 logs.go:123] Gathering logs for storage-provisioner [13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa] ...
	I0819 13:52:35.410966  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:35.465154  152452 logs.go:123] Gathering logs for containerd ...
	I0819 13:52:35.465182  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 13:52:35.532935  152452 logs.go:123] Gathering logs for kube-apiserver [fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef] ...
	I0819 13:52:35.532976  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:35.591317  152452 logs.go:123] Gathering logs for coredns [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d] ...
	I0819 13:52:35.591395  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:35.650002  152452 logs.go:123] Gathering logs for kube-scheduler [c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7] ...
	I0819 13:52:35.650029  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:35.700068  152452 logs.go:123] Gathering logs for kube-proxy [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303] ...
	I0819 13:52:35.700098  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:35.746365  152452 logs.go:123] Gathering logs for kindnet [765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9] ...
	I0819 13:52:35.746394  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:35.798143  152452 logs.go:123] Gathering logs for kubernetes-dashboard [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75] ...
	I0819 13:52:35.798174  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:35.839938  152452 logs.go:123] Gathering logs for kubelet ...
	I0819 13:52:35.839967  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:52:35.253341  159859 pod_ready.go:103] pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace has status "Ready":"False"
	I0819 13:52:37.252374  159859 pod_ready.go:82] duration metric: took 4m0.006540359s for pod "metrics-server-6867b74b74-jhj2b" in "kube-system" namespace to be "Ready" ...
	E0819 13:52:37.252406  159859 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 13:52:37.252416  159859 pod_ready.go:39] duration metric: took 4m0.613038721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 13:52:37.252430  159859 api_server.go:52] waiting for apiserver process to appear ...
	I0819 13:52:37.252461  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:52:37.252519  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:52:37.293044  159859 cri.go:89] found id: "31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f"
	I0819 13:52:37.293067  159859 cri.go:89] found id: "ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b"
	I0819 13:52:37.293073  159859 cri.go:89] found id: ""
	I0819 13:52:37.293081  159859 logs.go:276] 2 containers: [31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b]
	I0819 13:52:37.293160  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.297219  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.300777  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 13:52:37.300897  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:52:37.341906  159859 cri.go:89] found id: "4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297"
	I0819 13:52:37.341929  159859 cri.go:89] found id: "c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744"
	I0819 13:52:37.341934  159859 cri.go:89] found id: ""
	I0819 13:52:37.341942  159859 logs.go:276] 2 containers: [4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297 c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744]
	I0819 13:52:37.342022  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.346412  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.353190  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 13:52:37.353266  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:52:37.392848  159859 cri.go:89] found id: "28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6"
	I0819 13:52:37.392871  159859 cri.go:89] found id: "6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd"
	I0819 13:52:37.392879  159859 cri.go:89] found id: ""
	I0819 13:52:37.392887  159859 logs.go:276] 2 containers: [28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6 6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd]
	I0819 13:52:37.392964  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.396561  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.399721  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:52:37.399843  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:52:37.437523  159859 cri.go:89] found id: "494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc"
	I0819 13:52:37.437549  159859 cri.go:89] found id: "7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137"
	I0819 13:52:37.437554  159859 cri.go:89] found id: ""
	I0819 13:52:37.437561  159859 logs.go:276] 2 containers: [494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc 7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137]
	I0819 13:52:37.437626  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.441414  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.444746  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:52:37.444815  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:52:37.482649  159859 cri.go:89] found id: "3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07"
	I0819 13:52:37.482672  159859 cri.go:89] found id: "d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14"
	I0819 13:52:37.482676  159859 cri.go:89] found id: ""
	I0819 13:52:37.482684  159859 logs.go:276] 2 containers: [3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07 d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14]
	I0819 13:52:37.482741  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.486371  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.490148  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:52:37.490221  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:52:37.541825  159859 cri.go:89] found id: "661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119"
	I0819 13:52:37.541849  159859 cri.go:89] found id: "7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad"
	I0819 13:52:37.541854  159859 cri.go:89] found id: ""
	I0819 13:52:37.541862  159859 logs.go:276] 2 containers: [661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119 7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad]
	I0819 13:52:37.541920  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.545678  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.549199  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 13:52:37.549268  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:52:37.587433  159859 cri.go:89] found id: "885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c"
	I0819 13:52:37.587495  159859 cri.go:89] found id: "72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2"
	I0819 13:52:37.587525  159859 cri.go:89] found id: ""
	I0819 13:52:37.587548  159859 logs.go:276] 2 containers: [885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c 72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2]
	I0819 13:52:37.587638  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.591405  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.595407  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:52:37.595487  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:52:37.656301  159859 cri.go:89] found id: "e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1"
	I0819 13:52:37.656366  159859 cri.go:89] found id: "b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d"
	I0819 13:52:37.656386  159859 cri.go:89] found id: ""
	I0819 13:52:37.656413  159859 logs.go:276] 2 containers: [e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1 b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d]
	I0819 13:52:37.656502  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.660253  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.664137  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:52:37.664293  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:52:37.708406  159859 cri.go:89] found id: "8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93"
	I0819 13:52:37.708470  159859 cri.go:89] found id: ""
	I0819 13:52:37.708494  159859 logs.go:276] 1 containers: [8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93]
	I0819 13:52:37.708563  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:37.712304  159859 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:52:37.712330  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:52:37.860429  159859 logs.go:123] Gathering logs for kube-apiserver [ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b] ...
	I0819 13:52:37.860461  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b"
	I0819 13:52:37.932440  159859 logs.go:123] Gathering logs for etcd [4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297] ...
	I0819 13:52:37.932477  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297"
	I0819 13:52:37.981770  159859 logs.go:123] Gathering logs for coredns [28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6] ...
	I0819 13:52:37.981805  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6"
	I0819 13:52:38.029337  159859 logs.go:123] Gathering logs for kube-proxy [d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14] ...
	I0819 13:52:38.029371  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14"
	W0819 13:52:35.898330  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505154     660 reflector.go:138] object-"kube-system"/"coredns-token-mgkqs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mgkqs" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.898580  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505275     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vrkdd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vrkdd" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.898804  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505337     660 reflector.go:138] object-"kube-system"/"metrics-server-token-gngrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gngrg" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899032  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505397     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899252  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505458     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-gvnrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gvnrc" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899472  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505515     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899698  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505568     660 reflector.go:138] object-"kube-system"/"kindnet-token-db6v8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-db6v8" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.899941  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505619     660 reflector.go:138] object-"default"/"default-token-ldqq4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldqq4" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:35.907521  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.700916     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.907711  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.923497     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.910569  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:23 old-k8s-version-914579 kubelet[660]: E0819 13:47:23.556884     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.912575  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:38 old-k8s-version-914579 kubelet[660]: E0819 13:47:38.547027     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.913031  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:41 old-k8s-version-914579 kubelet[660]: E0819 13:47:41.081116     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.913488  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:42 old-k8s-version-914579 kubelet[660]: E0819 13:47:42.088354     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.913930  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:43 old-k8s-version-914579 kubelet[660]: E0819 13:47:43.093385     660 pod_workers.go:191] Error syncing pod e088dd49-745a-4473-b25c-b8b1bdef35d2 ("storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"
	W0819 13:52:35.914587  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.143364     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.917029  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.539102     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.917347  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:04 old-k8s-version-914579 kubelet[660]: E0819 13:48:04.530531     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.917951  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:06 old-k8s-version-914579 kubelet[660]: E0819 13:48:06.204897     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.918275  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:10 old-k8s-version-914579 kubelet[660]: E0819 13:48:10.144306     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.918461  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:17 old-k8s-version-914579 kubelet[660]: E0819 13:48:17.533996     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.918787  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:22 old-k8s-version-914579 kubelet[660]: E0819 13:48:22.534681     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.918970  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:28 old-k8s-version-914579 kubelet[660]: E0819 13:48:28.530752     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.919435  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:35 old-k8s-version-914579 kubelet[660]: E0819 13:48:35.289738     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.919895  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.144100     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.922351  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.550260     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.922678  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:53 old-k8s-version-914579 kubelet[660]: E0819 13:48:53.530760     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.922862  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:54 old-k8s-version-914579 kubelet[660]: E0819 13:48:54.530590     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.923186  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:05 old-k8s-version-914579 kubelet[660]: E0819 13:49:05.530108     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.923368  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:06 old-k8s-version-914579 kubelet[660]: E0819 13:49:06.531043     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.923551  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:19 old-k8s-version-914579 kubelet[660]: E0819 13:49:19.533642     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.924153  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:21 old-k8s-version-914579 kubelet[660]: E0819 13:49:21.435392     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.924483  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:30 old-k8s-version-914579 kubelet[660]: E0819 13:49:30.144187     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.924667  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:32 old-k8s-version-914579 kubelet[660]: E0819 13:49:32.530403     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.924993  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:41 old-k8s-version-914579 kubelet[660]: E0819 13:49:41.530796     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.925176  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:44 old-k8s-version-914579 kubelet[660]: E0819 13:49:44.530365     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.925501  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:55 old-k8s-version-914579 kubelet[660]: E0819 13:49:55.530750     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.925688  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:57 old-k8s-version-914579 kubelet[660]: E0819 13:49:57.534986     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.926015  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:07 old-k8s-version-914579 kubelet[660]: E0819 13:50:07.530719     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.928455  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:11 old-k8s-version-914579 kubelet[660]: E0819 13:50:11.538819     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:35.928791  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:22 old-k8s-version-914579 kubelet[660]: E0819 13:50:22.530086     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.928974  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:25 old-k8s-version-914579 kubelet[660]: E0819 13:50:25.531673     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.929326  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:35 old-k8s-version-914579 kubelet[660]: E0819 13:50:35.530288     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.929510  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:38 old-k8s-version-914579 kubelet[660]: E0819 13:50:38.536408     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.929836  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.531928     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.930292  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.676624     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.930620  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:00 old-k8s-version-914579 kubelet[660]: E0819 13:51:00.175960     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.930808  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:05 old-k8s-version-914579 kubelet[660]: E0819 13:51:05.532058     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.931133  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:13 old-k8s-version-914579 kubelet[660]: E0819 13:51:13.534019     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.931316  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:18 old-k8s-version-914579 kubelet[660]: E0819 13:51:18.530498     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.931826  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:25 old-k8s-version-914579 kubelet[660]: E0819 13:51:25.530598     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.932029  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:31 old-k8s-version-914579 kubelet[660]: E0819 13:51:31.530699     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.932414  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:36 old-k8s-version-914579 kubelet[660]: E0819 13:51:36.530142     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.932607  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:42 old-k8s-version-914579 kubelet[660]: E0819 13:51:42.530485     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.932933  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:48 old-k8s-version-914579 kubelet[660]: E0819 13:51:48.530149     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.933116  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:54 old-k8s-version-914579 kubelet[660]: E0819 13:51:54.530585     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.933493  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:02 old-k8s-version-914579 kubelet[660]: E0819 13:52:02.531227     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.933689  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.934037  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.934225  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:35.934560  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:35.934746  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:35.934760  152452 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:52:35.934778  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:52:36.093366  152452 logs.go:123] Gathering logs for etcd [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d] ...
	I0819 13:52:36.093398  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:36.140421  152452 logs.go:123] Gathering logs for kube-controller-manager [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe] ...
	I0819 13:52:36.140454  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:36.207906  152452 logs.go:123] Gathering logs for kube-apiserver [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234] ...
	I0819 13:52:36.207941  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:36.281081  152452 logs.go:123] Gathering logs for etcd [6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d] ...
	I0819 13:52:36.281117  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:36.371336  152452 logs.go:123] Gathering logs for container status ...
	I0819 13:52:36.371370  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:52:36.423185  152452 logs.go:123] Gathering logs for kube-controller-manager [ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e] ...
	I0819 13:52:36.423217  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:36.477465  152452 logs.go:123] Gathering logs for kindnet [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4] ...
	I0819 13:52:36.477500  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:36.551026  152452 logs.go:123] Gathering logs for storage-provisioner [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b] ...
	I0819 13:52:36.551090  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:36.605781  152452 logs.go:123] Gathering logs for dmesg ...
	I0819 13:52:36.605810  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:52:36.625534  152452 logs.go:123] Gathering logs for coredns [8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5] ...
	I0819 13:52:36.625564  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:36.665220  152452 logs.go:123] Gathering logs for kube-scheduler [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c] ...
	I0819 13:52:36.665250  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:36.706249  152452 logs.go:123] Gathering logs for kube-proxy [d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28] ...
	I0819 13:52:36.706279  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:36.751297  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:36.751324  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 13:52:36.751378  152452 out.go:270] X Problems detected in kubelet:
	W0819 13:52:36.751394  152452 out.go:270]   Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:36.751404  152452 out.go:270]   Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:36.751419  152452 out.go:270]   Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:36.751425  152452 out.go:270]   Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:36.751434  152452 out.go:270]   Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:36.751439  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:36.751445  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:52:38.073242  159859 logs.go:123] Gathering logs for kube-controller-manager [661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119] ...
	I0819 13:52:38.073270  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119"
	I0819 13:52:38.151860  159859 logs.go:123] Gathering logs for kubelet ...
	I0819 13:52:38.151894  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:52:38.238961  159859 logs.go:123] Gathering logs for dmesg ...
	I0819 13:52:38.238999  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:52:38.264133  159859 logs.go:123] Gathering logs for kube-apiserver [31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f] ...
	I0819 13:52:38.264165  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f"
	I0819 13:52:38.337136  159859 logs.go:123] Gathering logs for kube-scheduler [494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc] ...
	I0819 13:52:38.337169  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc"
	I0819 13:52:38.385388  159859 logs.go:123] Gathering logs for kube-proxy [3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07] ...
	I0819 13:52:38.385420  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07"
	I0819 13:52:38.428817  159859 logs.go:123] Gathering logs for storage-provisioner [e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1] ...
	I0819 13:52:38.428848  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1"
	I0819 13:52:38.469504  159859 logs.go:123] Gathering logs for coredns [6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd] ...
	I0819 13:52:38.469575  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd"
	I0819 13:52:38.509318  159859 logs.go:123] Gathering logs for kindnet [72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2] ...
	I0819 13:52:38.509351  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2"
	I0819 13:52:38.561112  159859 logs.go:123] Gathering logs for container status ...
	I0819 13:52:38.561145  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:52:38.617116  159859 logs.go:123] Gathering logs for etcd [c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744] ...
	I0819 13:52:38.617146  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744"
	I0819 13:52:38.661490  159859 logs.go:123] Gathering logs for kube-scheduler [7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137] ...
	I0819 13:52:38.661526  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137"
	I0819 13:52:38.709998  159859 logs.go:123] Gathering logs for kube-controller-manager [7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad] ...
	I0819 13:52:38.710032  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad"
	I0819 13:52:38.772527  159859 logs.go:123] Gathering logs for kindnet [885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c] ...
	I0819 13:52:38.772563  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c"
	I0819 13:52:38.826699  159859 logs.go:123] Gathering logs for storage-provisioner [b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d] ...
	I0819 13:52:38.826736  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d"
	I0819 13:52:38.867507  159859 logs.go:123] Gathering logs for kubernetes-dashboard [8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93] ...
	I0819 13:52:38.867537  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93"
	I0819 13:52:38.909543  159859 logs.go:123] Gathering logs for containerd ...
	I0819 13:52:38.909573  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 13:52:41.475324  159859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:52:41.495861  159859 api_server.go:72] duration metric: took 4m10.389757829s to wait for apiserver process to appear ...
	I0819 13:52:41.495888  159859 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:52:41.495923  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:52:41.495981  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:52:41.564966  159859 cri.go:89] found id: "31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f"
	I0819 13:52:41.564987  159859 cri.go:89] found id: "ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b"
	I0819 13:52:41.564992  159859 cri.go:89] found id: ""
	I0819 13:52:41.564999  159859 logs.go:276] 2 containers: [31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b]
	I0819 13:52:41.565068  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.569835  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.578711  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 13:52:41.578780  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:52:41.626485  159859 cri.go:89] found id: "4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297"
	I0819 13:52:41.626505  159859 cri.go:89] found id: "c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744"
	I0819 13:52:41.626509  159859 cri.go:89] found id: ""
	I0819 13:52:41.626517  159859 logs.go:276] 2 containers: [4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297 c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744]
	I0819 13:52:41.626577  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.630642  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.634732  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 13:52:41.634806  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:52:41.688049  159859 cri.go:89] found id: "28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6"
	I0819 13:52:41.688069  159859 cri.go:89] found id: "6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd"
	I0819 13:52:41.688074  159859 cri.go:89] found id: ""
	I0819 13:52:41.688081  159859 logs.go:276] 2 containers: [28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6 6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd]
	I0819 13:52:41.688137  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.691553  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.695222  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:52:41.695291  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:52:41.736857  159859 cri.go:89] found id: "494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc"
	I0819 13:52:41.736880  159859 cri.go:89] found id: "7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137"
	I0819 13:52:41.736885  159859 cri.go:89] found id: ""
	I0819 13:52:41.736893  159859 logs.go:276] 2 containers: [494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc 7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137]
	I0819 13:52:41.736951  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.740660  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.744110  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:52:41.744188  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:52:41.786237  159859 cri.go:89] found id: "3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07"
	I0819 13:52:41.786262  159859 cri.go:89] found id: "d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14"
	I0819 13:52:41.786267  159859 cri.go:89] found id: ""
	I0819 13:52:41.786274  159859 logs.go:276] 2 containers: [3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07 d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14]
	I0819 13:52:41.786337  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.790247  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.793921  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:52:41.793994  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:52:41.833098  159859 cri.go:89] found id: "661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119"
	I0819 13:52:41.833122  159859 cri.go:89] found id: "7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad"
	I0819 13:52:41.833127  159859 cri.go:89] found id: ""
	I0819 13:52:41.833135  159859 logs.go:276] 2 containers: [661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119 7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad]
	I0819 13:52:41.833191  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.837201  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.840789  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 13:52:41.840905  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:52:41.880178  159859 cri.go:89] found id: "885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c"
	I0819 13:52:41.880252  159859 cri.go:89] found id: "72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2"
	I0819 13:52:41.880263  159859 cri.go:89] found id: ""
	I0819 13:52:41.880272  159859 logs.go:276] 2 containers: [885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c 72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2]
	I0819 13:52:41.880350  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.884179  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.888464  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:52:41.888545  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:52:41.932211  159859 cri.go:89] found id: "8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93"
	I0819 13:52:41.932278  159859 cri.go:89] found id: ""
	I0819 13:52:41.932301  159859 logs.go:276] 1 containers: [8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93]
	I0819 13:52:41.932393  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.936233  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:52:41.936347  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:52:41.975658  159859 cri.go:89] found id: "e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1"
	I0819 13:52:41.975725  159859 cri.go:89] found id: "b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d"
	I0819 13:52:41.975745  159859 cri.go:89] found id: ""
	I0819 13:52:41.975771  159859 logs.go:276] 2 containers: [e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1 b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d]
	I0819 13:52:41.975906  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.979561  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:41.983120  159859 logs.go:123] Gathering logs for etcd [c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744] ...
	I0819 13:52:41.983188  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744"
	I0819 13:52:42.046470  159859 logs.go:123] Gathering logs for kube-scheduler [494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc] ...
	I0819 13:52:42.046503  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc"
	I0819 13:52:42.097877  159859 logs.go:123] Gathering logs for kube-proxy [3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07] ...
	I0819 13:52:42.097911  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07"
	I0819 13:52:42.167030  159859 logs.go:123] Gathering logs for kube-proxy [d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14] ...
	I0819 13:52:42.167065  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14"
	I0819 13:52:42.227264  159859 logs.go:123] Gathering logs for kube-controller-manager [661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119] ...
	I0819 13:52:42.227300  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119"
	I0819 13:52:42.328651  159859 logs.go:123] Gathering logs for kindnet [885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c] ...
	I0819 13:52:42.328697  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c"
	I0819 13:52:42.396459  159859 logs.go:123] Gathering logs for kubelet ...
	I0819 13:52:42.396505  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:52:42.474093  159859 logs.go:123] Gathering logs for kube-apiserver [31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f] ...
	I0819 13:52:42.474130  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f"
	I0819 13:52:42.556358  159859 logs.go:123] Gathering logs for kubernetes-dashboard [8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93] ...
	I0819 13:52:42.556394  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93"
	I0819 13:52:42.600708  159859 logs.go:123] Gathering logs for etcd [4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297] ...
	I0819 13:52:42.600740  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297"
	I0819 13:52:42.653918  159859 logs.go:123] Gathering logs for kube-apiserver [ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b] ...
	I0819 13:52:42.653952  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b"
	I0819 13:52:42.701524  159859 logs.go:123] Gathering logs for coredns [6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd] ...
	I0819 13:52:42.701556  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd"
	I0819 13:52:42.754659  159859 logs.go:123] Gathering logs for storage-provisioner [e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1] ...
	I0819 13:52:42.754687  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1"
	I0819 13:52:42.795685  159859 logs.go:123] Gathering logs for storage-provisioner [b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d] ...
	I0819 13:52:42.795714  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d"
	I0819 13:52:42.835156  159859 logs.go:123] Gathering logs for container status ...
	I0819 13:52:42.835186  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:52:42.884347  159859 logs.go:123] Gathering logs for dmesg ...
	I0819 13:52:42.884378  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:52:42.901388  159859 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:52:42.901416  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:52:43.029106  159859 logs.go:123] Gathering logs for kube-controller-manager [7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad] ...
	I0819 13:52:43.029144  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad"
	I0819 13:52:43.089701  159859 logs.go:123] Gathering logs for kindnet [72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2] ...
	I0819 13:52:43.089739  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2"
	I0819 13:52:43.141888  159859 logs.go:123] Gathering logs for containerd ...
	I0819 13:52:43.141925  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 13:52:43.209170  159859 logs.go:123] Gathering logs for coredns [28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6] ...
	I0819 13:52:43.209207  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6"
	I0819 13:52:43.259184  159859 logs.go:123] Gathering logs for kube-scheduler [7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137] ...
	I0819 13:52:43.259215  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137"
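Each "Gathering logs for ..." step above then shells out to "crictl logs --tail 400" with one of the container IDs discovered earlier. A minimal sketch of that retrieval step is shown below, again purely as an illustration: the container ID is a placeholder, and the real runs wrap the command in /bin/bash -c with sudo over SSH.

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines of a container's logs via crictl,
// the same command the ssh_runner lines above execute on the node.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// "CONTAINER_ID" is a placeholder; substitute an ID returned by crictl ps.
	logs, err := tailContainerLogs("CONTAINER_ID", 400)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(logs)
}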
	I0819 13:52:45.807200  159859 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0819 13:52:45.815496  159859 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0819 13:52:45.817022  159859 api_server.go:141] control plane version: v1.31.0
	I0819 13:52:45.817050  159859 api_server.go:131] duration metric: took 4.321155132s to wait for apiserver health ...
	I0819 13:52:45.817061  159859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 13:52:45.817087  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:52:45.817163  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:52:45.856631  159859 cri.go:89] found id: "31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f"
	I0819 13:52:45.856654  159859 cri.go:89] found id: "ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b"
	I0819 13:52:45.856660  159859 cri.go:89] found id: ""
	I0819 13:52:45.856667  159859 logs.go:276] 2 containers: [31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b]
	I0819 13:52:45.856725  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:45.860463  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:45.864546  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 13:52:45.864622  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:52:45.904207  159859 cri.go:89] found id: "4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297"
	I0819 13:52:45.904232  159859 cri.go:89] found id: "c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744"
	I0819 13:52:45.904237  159859 cri.go:89] found id: ""
	I0819 13:52:45.904245  159859 logs.go:276] 2 containers: [4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297 c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744]
	I0819 13:52:45.904306  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:45.908147  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:45.911741  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 13:52:45.911859  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:52:45.951384  159859 cri.go:89] found id: "28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6"
	I0819 13:52:45.951410  159859 cri.go:89] found id: "6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd"
	I0819 13:52:45.951415  159859 cri.go:89] found id: ""
	I0819 13:52:45.951423  159859 logs.go:276] 2 containers: [28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6 6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd]
	I0819 13:52:45.951480  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:45.955479  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:45.959281  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:52:45.959355  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:52:46.001880  159859 cri.go:89] found id: "494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc"
	I0819 13:52:46.001902  159859 cri.go:89] found id: "7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137"
	I0819 13:52:46.001907  159859 cri.go:89] found id: ""
	I0819 13:52:46.001916  159859 logs.go:276] 2 containers: [494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc 7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137]
	I0819 13:52:46.001999  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.013111  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.017550  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:52:46.017642  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:52:46.065187  159859 cri.go:89] found id: "3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07"
	I0819 13:52:46.065211  159859 cri.go:89] found id: "d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14"
	I0819 13:52:46.065216  159859 cri.go:89] found id: ""
	I0819 13:52:46.065224  159859 logs.go:276] 2 containers: [3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07 d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14]
	I0819 13:52:46.065281  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.069148  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.072779  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:52:46.072882  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:52:46.117738  159859 cri.go:89] found id: "661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119"
	I0819 13:52:46.117815  159859 cri.go:89] found id: "7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad"
	I0819 13:52:46.117828  159859 cri.go:89] found id: ""
	I0819 13:52:46.117836  159859 logs.go:276] 2 containers: [661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119 7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad]
	I0819 13:52:46.117909  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.121800  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.125875  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 13:52:46.125992  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:52:46.175766  159859 cri.go:89] found id: "885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c"
	I0819 13:52:46.175894  159859 cri.go:89] found id: "72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2"
	I0819 13:52:46.175915  159859 cri.go:89] found id: ""
	I0819 13:52:46.175942  159859 logs.go:276] 2 containers: [885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c 72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2]
	I0819 13:52:46.176038  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.179893  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.184065  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:52:46.184181  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:52:46.224013  159859 cri.go:89] found id: "8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93"
	I0819 13:52:46.224038  159859 cri.go:89] found id: ""
	I0819 13:52:46.224047  159859 logs.go:276] 1 containers: [8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93]
	I0819 13:52:46.224106  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.228007  159859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:52:46.228084  159859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:52:46.267226  159859 cri.go:89] found id: "e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1"
	I0819 13:52:46.267253  159859 cri.go:89] found id: "b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d"
	I0819 13:52:46.267259  159859 cri.go:89] found id: ""
	I0819 13:52:46.267266  159859 logs.go:276] 2 containers: [e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1 b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d]
	I0819 13:52:46.267329  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.271227  159859 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.274718  159859 logs.go:123] Gathering logs for container status ...
	I0819 13:52:46.274787  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:52:46.332956  159859 logs.go:123] Gathering logs for kubelet ...
	I0819 13:52:46.332983  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 13:52:46.413857  159859 logs.go:123] Gathering logs for kube-scheduler [7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137] ...
	I0819 13:52:46.413895  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7466e02589549507e8614ae43605f4e6f4605606ee4e61167b26d408b8078137"
	I0819 13:52:46.466854  159859 logs.go:123] Gathering logs for storage-provisioner [b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d] ...
	I0819 13:52:46.466884  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b48d9970f1c04b6885e9082b0ebb5f77664385fc74bb82287e4a67e2257afc4d"
	I0819 13:52:46.510357  159859 logs.go:123] Gathering logs for kube-proxy [d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14] ...
	I0819 13:52:46.510384  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d144fb57c1f864d402cb71f0708356473c2ebcf103514d671338129531d25e14"
	I0819 13:52:46.551172  159859 logs.go:123] Gathering logs for kindnet [72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2] ...
	I0819 13:52:46.551204  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72f86ce2e020cf467a960cedb19e56cde7e0f6cf8509a46304c45072f11a64a2"
	I0819 13:52:46.609693  159859 logs.go:123] Gathering logs for storage-provisioner [e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1] ...
	I0819 13:52:46.609725  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e30b6d2a2ef745c854e592691b57aac843e7d42c5fe1b6897816cbdb178742b1"
	I0819 13:52:46.661281  159859 logs.go:123] Gathering logs for dmesg ...
	I0819 13:52:46.661311  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:52:46.678849  159859 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:52:46.678924  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:52:46.841121  159859 logs.go:123] Gathering logs for coredns [28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6] ...
	I0819 13:52:46.841171  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28d9baa2b6e93e092f31f623ea40f5dc0b3c07d7117c2886a7a9a34666c927c6"
	I0819 13:52:46.911212  159859 logs.go:123] Gathering logs for kube-controller-manager [661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119] ...
	I0819 13:52:46.911242  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 661f3f8d6b252e80a5c64d55369634b0a08dbd0a66c8d22573770fe6a428c119"
	I0819 13:52:47.004071  159859 logs.go:123] Gathering logs for kindnet [885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c] ...
	I0819 13:52:47.004122  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 885b37e08c80c553570637679785115f4c325b8bec8dc444bd754dab1ee3ce7c"
	I0819 13:52:47.135649  159859 logs.go:123] Gathering logs for kubernetes-dashboard [8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93] ...
	I0819 13:52:47.135741  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e6cdd6bed0c2b81a41d0803db003f1d2f29612971965d14486dd8440d84ae93"
	I0819 13:52:47.200538  159859 logs.go:123] Gathering logs for containerd ...
	I0819 13:52:47.200618  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 13:52:47.280278  159859 logs.go:123] Gathering logs for etcd [4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297] ...
	I0819 13:52:47.280360  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e1354ba690cd91d28549484fc0f3383f07194d995caa4d76f819da5e993a297"
	I0819 13:52:47.342950  159859 logs.go:123] Gathering logs for etcd [c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744] ...
	I0819 13:52:47.343011  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2667c90b1c00f50d02ddaf922cd598827148598792fca8aa690697e65a98744"
	I0819 13:52:47.433712  159859 logs.go:123] Gathering logs for kube-proxy [3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07] ...
	I0819 13:52:47.433772  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fabea8e04f6abca30b247c4889af7e3c503181c02673fe407b6feebf8604a07"
	I0819 13:52:47.500790  159859 logs.go:123] Gathering logs for kube-scheduler [494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc] ...
	I0819 13:52:47.500818  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 494769b089cf99a12deae58e0f27768f8a83fee5c330bd4d796e04286252e9cc"
	I0819 13:52:47.566508  159859 logs.go:123] Gathering logs for kube-controller-manager [7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad] ...
	I0819 13:52:47.566540  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7dd9e79429542dbb8adda3b3a2334949bacde3c045dfd5a276eb9ece3d9f29ad"
	I0819 13:52:47.665499  159859 logs.go:123] Gathering logs for kube-apiserver [31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f] ...
	I0819 13:52:47.665544  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31427da9f6f535c6d8c0bfe81742ea748ad4b6e794d6f5a32b75e28cfdda811f"
	I0819 13:52:47.738075  159859 logs.go:123] Gathering logs for kube-apiserver [ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b] ...
	I0819 13:52:47.738115  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff22ac16052d2c43ed9793c39a004d6a1901a8669144d994e07415585f265b2b"
	I0819 13:52:47.822848  159859 logs.go:123] Gathering logs for coredns [6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd] ...
	I0819 13:52:47.822901  159859 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f5b16f013f8d9677cb7b065862e53df5d82fd3fc59a75567ab2b934b923bafd"
	I0819 13:52:50.383104  159859 system_pods.go:59] 9 kube-system pods found
	I0819 13:52:50.383156  159859 system_pods.go:61] "coredns-6f6b679f8f-fgrp5" [94c8ef36-d04b-432e-aadb-beadb7c4dffe] Running
	I0819 13:52:50.383163  159859 system_pods.go:61] "etcd-no-preload-895877" [4000f8d0-786d-46ca-80b2-fbcf4ff30c4f] Running
	I0819 13:52:50.383169  159859 system_pods.go:61] "kindnet-rcssr" [d471cb38-8e84-4403-81f4-cc5da62cb710] Running
	I0819 13:52:50.383174  159859 system_pods.go:61] "kube-apiserver-no-preload-895877" [c74151df-7e44-4a81-b1ca-848faa86f9fc] Running
	I0819 13:52:50.383178  159859 system_pods.go:61] "kube-controller-manager-no-preload-895877" [1e0723f7-35e2-4b3b-9a07-5edb4fbff997] Running
	I0819 13:52:50.383182  159859 system_pods.go:61] "kube-proxy-9q48v" [f30af7ca-281a-463e-8804-1becdd05a8ea] Running
	I0819 13:52:50.383186  159859 system_pods.go:61] "kube-scheduler-no-preload-895877" [75de3346-94d2-4d32-b92a-6223bc6e3830] Running
	I0819 13:52:50.383203  159859 system_pods.go:61] "metrics-server-6867b74b74-jhj2b" [38b73b25-64d5-4920-a9a7-24824f300411] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:52:50.383207  159859 system_pods.go:61] "storage-provisioner" [f2fc4812-201e-48d1-9c20-ad64b955f7a5] Running
	I0819 13:52:50.383215  159859 system_pods.go:74] duration metric: took 4.566148261s to wait for pod list to return data ...
	I0819 13:52:50.383228  159859 default_sa.go:34] waiting for default service account to be created ...
	I0819 13:52:50.386892  159859 default_sa.go:45] found service account: "default"
	I0819 13:52:50.386919  159859 default_sa.go:55] duration metric: took 3.68357ms for default service account to be created ...
	I0819 13:52:50.386929  159859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 13:52:50.393143  159859 system_pods.go:86] 9 kube-system pods found
	I0819 13:52:50.393194  159859 system_pods.go:89] "coredns-6f6b679f8f-fgrp5" [94c8ef36-d04b-432e-aadb-beadb7c4dffe] Running
	I0819 13:52:50.393202  159859 system_pods.go:89] "etcd-no-preload-895877" [4000f8d0-786d-46ca-80b2-fbcf4ff30c4f] Running
	I0819 13:52:50.393207  159859 system_pods.go:89] "kindnet-rcssr" [d471cb38-8e84-4403-81f4-cc5da62cb710] Running
	I0819 13:52:50.393212  159859 system_pods.go:89] "kube-apiserver-no-preload-895877" [c74151df-7e44-4a81-b1ca-848faa86f9fc] Running
	I0819 13:52:50.393218  159859 system_pods.go:89] "kube-controller-manager-no-preload-895877" [1e0723f7-35e2-4b3b-9a07-5edb4fbff997] Running
	I0819 13:52:50.393228  159859 system_pods.go:89] "kube-proxy-9q48v" [f30af7ca-281a-463e-8804-1becdd05a8ea] Running
	I0819 13:52:50.393232  159859 system_pods.go:89] "kube-scheduler-no-preload-895877" [75de3346-94d2-4d32-b92a-6223bc6e3830] Running
	I0819 13:52:50.393240  159859 system_pods.go:89] "metrics-server-6867b74b74-jhj2b" [38b73b25-64d5-4920-a9a7-24824f300411] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 13:52:50.393257  159859 system_pods.go:89] "storage-provisioner" [f2fc4812-201e-48d1-9c20-ad64b955f7a5] Running
	I0819 13:52:50.393266  159859 system_pods.go:126] duration metric: took 6.330982ms to wait for k8s-apps to be running ...
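The system_pods wait above lists every pod in kube-system and reports its phase (Running, Pending, and so on) before declaring k8s-apps ready. A rough client-go equivalent is sketched below, only to illustrate what those checks amount to; the kubeconfig path is taken from the kubectl invocation earlier in this log and is an assumption here, not part of the test output.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path assumed from the "describe nodes" command above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
	fmt.Printf("%d/%d kube-system pods Running\n", running, len(pods.Items))
}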
	I0819 13:52:50.393287  159859 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 13:52:50.393354  159859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:52:50.405536  159859 system_svc.go:56] duration metric: took 12.241618ms WaitForService to wait for kubelet
	I0819 13:52:50.405575  159859 kubeadm.go:582] duration metric: took 4m19.299477613s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 13:52:50.405606  159859 node_conditions.go:102] verifying NodePressure condition ...
	I0819 13:52:50.408805  159859 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 13:52:50.408852  159859 node_conditions.go:123] node cpu capacity is 2
	I0819 13:52:50.408865  159859 node_conditions.go:105] duration metric: took 3.250984ms to run NodePressure ...
	I0819 13:52:50.408879  159859 start.go:241] waiting for startup goroutines ...
	I0819 13:52:50.408886  159859 start.go:246] waiting for cluster config update ...
	I0819 13:52:50.408898  159859 start.go:255] writing updated cluster config ...
	I0819 13:52:50.409189  159859 ssh_runner.go:195] Run: rm -f paused
	I0819 13:52:50.472652  159859 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 13:52:50.475887  159859 out.go:177] * Done! kubectl is now configured to use "no-preload-895877" cluster and "default" namespace by default
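Before the "Done!" line above, api_server.go polls https://192.168.85.2:8443/healthz until it answers 200/ok. A minimal sketch of that readiness loop follows; it skips TLS certificate verification purely for brevity (minikube itself authenticates with the cluster's client certificates), so treat it as an illustration rather than the real check.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given healthz URL until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}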
	I0819 13:52:46.752339  152452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:52:46.770079  152452 api_server.go:72] duration metric: took 5m57.199385315s to wait for apiserver process to appear ...
	I0819 13:52:46.770108  152452 api_server.go:88] waiting for apiserver healthz status ...
	I0819 13:52:46.770164  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 13:52:46.770245  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 13:52:46.856422  152452 cri.go:89] found id: "b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:46.856452  152452 cri.go:89] found id: "fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:46.856457  152452 cri.go:89] found id: ""
	I0819 13:52:46.856466  152452 logs.go:276] 2 containers: [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef]
	I0819 13:52:46.856537  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.860669  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.865075  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 13:52:46.865149  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 13:52:46.936606  152452 cri.go:89] found id: "a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:46.936634  152452 cri.go:89] found id: "6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:46.936640  152452 cri.go:89] found id: ""
	I0819 13:52:46.936648  152452 logs.go:276] 2 containers: [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d]
	I0819 13:52:46.936720  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.942994  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:46.948231  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 13:52:46.948310  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 13:52:47.020145  152452 cri.go:89] found id: "2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:47.020172  152452 cri.go:89] found id: "8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:47.020177  152452 cri.go:89] found id: ""
	I0819 13:52:47.020188  152452 logs.go:276] 2 containers: [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5]
	I0819 13:52:47.020279  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.024967  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.034264  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 13:52:47.034360  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 13:52:47.096009  152452 cri.go:89] found id: "db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:47.096031  152452 cri.go:89] found id: "c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:47.096036  152452 cri.go:89] found id: ""
	I0819 13:52:47.096043  152452 logs.go:276] 2 containers: [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7]
	I0819 13:52:47.096111  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.102562  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.109207  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 13:52:47.109283  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 13:52:47.184806  152452 cri.go:89] found id: "024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:47.184828  152452 cri.go:89] found id: "d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:47.184833  152452 cri.go:89] found id: ""
	I0819 13:52:47.184842  152452 logs.go:276] 2 containers: [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28]
	I0819 13:52:47.184903  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.189384  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.194364  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 13:52:47.194448  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 13:52:47.265188  152452 cri.go:89] found id: "fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:47.265218  152452 cri.go:89] found id: "ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:47.265224  152452 cri.go:89] found id: ""
	I0819 13:52:47.265231  152452 logs.go:276] 2 containers: [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e]
	I0819 13:52:47.265303  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.269862  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.273980  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 13:52:47.274061  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 13:52:47.338484  152452 cri.go:89] found id: "c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:47.338509  152452 cri.go:89] found id: "765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:47.338515  152452 cri.go:89] found id: ""
	I0819 13:52:47.338522  152452 logs.go:276] 2 containers: [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9]
	I0819 13:52:47.338580  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.343202  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.370668  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 13:52:47.370745  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 13:52:47.432108  152452 cri.go:89] found id: "65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:47.432132  152452 cri.go:89] found id: "13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:47.432137  152452 cri.go:89] found id: ""
	I0819 13:52:47.432160  152452 logs.go:276] 2 containers: [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa]
	I0819 13:52:47.432231  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.438213  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.446636  152452 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 13:52:47.446722  152452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 13:52:47.510037  152452 cri.go:89] found id: "15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:47.510060  152452 cri.go:89] found id: ""
	I0819 13:52:47.510069  152452 logs.go:276] 1 containers: [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75]
	I0819 13:52:47.510144  152452 ssh_runner.go:195] Run: which crictl
	I0819 13:52:47.515774  152452 logs.go:123] Gathering logs for etcd [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d] ...
	I0819 13:52:47.515814  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d"
	I0819 13:52:47.636906  152452 logs.go:123] Gathering logs for storage-provisioner [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b] ...
	I0819 13:52:47.636943  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b"
	I0819 13:52:47.704121  152452 logs.go:123] Gathering logs for kubernetes-dashboard [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75] ...
	I0819 13:52:47.704152  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75"
	I0819 13:52:47.805660  152452 logs.go:123] Gathering logs for container status ...
	I0819 13:52:47.805689  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 13:52:47.889024  152452 logs.go:123] Gathering logs for kubelet ...
	I0819 13:52:47.889055  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 13:52:47.950679  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505154     660 reflector.go:138] object-"kube-system"/"coredns-token-mgkqs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mgkqs" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.950954  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505275     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vrkdd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vrkdd" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951176  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505337     660 reflector.go:138] object-"kube-system"/"metrics-server-token-gngrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-gngrg" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951381  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505397     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951596  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505458     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-gvnrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gvnrc" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.951821  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505515     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.952031  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505568     660 reflector.go:138] object-"kube-system"/"kindnet-token-db6v8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-db6v8" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.952238  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:08 old-k8s-version-914579 kubelet[660]: E0819 13:47:08.505619     660 reflector.go:138] object-"default"/"default-token-ldqq4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldqq4" is forbidden: User "system:node:old-k8s-version-914579" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-914579' and this object
	W0819 13:52:47.959838  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.700916     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.960028  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:11 old-k8s-version-914579 kubelet[660]: E0819 13:47:11.923497     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.962801  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:23 old-k8s-version-914579 kubelet[660]: E0819 13:47:23.556884     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.964806  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:38 old-k8s-version-914579 kubelet[660]: E0819 13:47:38.547027     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.965259  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:41 old-k8s-version-914579 kubelet[660]: E0819 13:47:41.081116     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.965718  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:42 old-k8s-version-914579 kubelet[660]: E0819 13:47:42.088354     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.966157  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:43 old-k8s-version-914579 kubelet[660]: E0819 13:47:43.093385     660 pod_workers.go:191] Error syncing pod e088dd49-745a-4473-b25c-b8b1bdef35d2 ("storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e088dd49-745a-4473-b25c-b8b1bdef35d2)"
	W0819 13:52:47.966810  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.143364     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.969243  152452 logs.go:138] Found kubelet problem: Aug 19 13:47:50 old-k8s-version-914579 kubelet[660]: E0819 13:47:50.539102     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.969559  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:04 old-k8s-version-914579 kubelet[660]: E0819 13:48:04.530531     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.970145  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:06 old-k8s-version-914579 kubelet[660]: E0819 13:48:06.204897     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.970473  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:10 old-k8s-version-914579 kubelet[660]: E0819 13:48:10.144306     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.970654  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:17 old-k8s-version-914579 kubelet[660]: E0819 13:48:17.533996     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.970983  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:22 old-k8s-version-914579 kubelet[660]: E0819 13:48:22.534681     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.971164  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:28 old-k8s-version-914579 kubelet[660]: E0819 13:48:28.530752     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.971618  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:35 old-k8s-version-914579 kubelet[660]: E0819 13:48:35.289738     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.972083  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.144100     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.974502  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:40 old-k8s-version-914579 kubelet[660]: E0819 13:48:40.550260     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.974827  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:53 old-k8s-version-914579 kubelet[660]: E0819 13:48:53.530760     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.975011  152452 logs.go:138] Found kubelet problem: Aug 19 13:48:54 old-k8s-version-914579 kubelet[660]: E0819 13:48:54.530590     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.975335  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:05 old-k8s-version-914579 kubelet[660]: E0819 13:49:05.530108     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.975517  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:06 old-k8s-version-914579 kubelet[660]: E0819 13:49:06.531043     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.975699  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:19 old-k8s-version-914579 kubelet[660]: E0819 13:49:19.533642     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.976287  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:21 old-k8s-version-914579 kubelet[660]: E0819 13:49:21.435392     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.976615  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:30 old-k8s-version-914579 kubelet[660]: E0819 13:49:30.144187     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.976800  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:32 old-k8s-version-914579 kubelet[660]: E0819 13:49:32.530403     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.977129  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:41 old-k8s-version-914579 kubelet[660]: E0819 13:49:41.530796     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.977314  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:44 old-k8s-version-914579 kubelet[660]: E0819 13:49:44.530365     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.977643  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:55 old-k8s-version-914579 kubelet[660]: E0819 13:49:55.530750     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.977826  152452 logs.go:138] Found kubelet problem: Aug 19 13:49:57 old-k8s-version-914579 kubelet[660]: E0819 13:49:57.534986     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.978150  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:07 old-k8s-version-914579 kubelet[660]: E0819 13:50:07.530719     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.980581  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:11 old-k8s-version-914579 kubelet[660]: E0819 13:50:11.538819     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0819 13:52:47.980908  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:22 old-k8s-version-914579 kubelet[660]: E0819 13:50:22.530086     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.981091  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:25 old-k8s-version-914579 kubelet[660]: E0819 13:50:25.531673     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.981423  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:35 old-k8s-version-914579 kubelet[660]: E0819 13:50:35.530288     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.981610  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:38 old-k8s-version-914579 kubelet[660]: E0819 13:50:38.536408     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.981975  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.531928     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.982431  152452 logs.go:138] Found kubelet problem: Aug 19 13:50:50 old-k8s-version-914579 kubelet[660]: E0819 13:50:50.676624     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.982758  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:00 old-k8s-version-914579 kubelet[660]: E0819 13:51:00.175960     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.982940  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:05 old-k8s-version-914579 kubelet[660]: E0819 13:51:05.532058     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.983266  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:13 old-k8s-version-914579 kubelet[660]: E0819 13:51:13.534019     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.983448  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:18 old-k8s-version-914579 kubelet[660]: E0819 13:51:18.530498     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.983773  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:25 old-k8s-version-914579 kubelet[660]: E0819 13:51:25.530598     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.983961  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:31 old-k8s-version-914579 kubelet[660]: E0819 13:51:31.530699     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.984287  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:36 old-k8s-version-914579 kubelet[660]: E0819 13:51:36.530142     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.984471  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:42 old-k8s-version-914579 kubelet[660]: E0819 13:51:42.530485     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.984797  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:48 old-k8s-version-914579 kubelet[660]: E0819 13:51:48.530149     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.984991  152452 logs.go:138] Found kubelet problem: Aug 19 13:51:54 old-k8s-version-914579 kubelet[660]: E0819 13:51:54.530585     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.985325  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:02 old-k8s-version-914579 kubelet[660]: E0819 13:52:02.531227     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.985511  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.985850  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.986033  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.986358  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.986541  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:47.986866  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532025     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:47.987049  152452 logs.go:138] Found kubelet problem: Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532731     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:47.987058  152452 logs.go:123] Gathering logs for kube-apiserver [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234] ...
	I0819 13:52:47.987074  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234"
	I0819 13:52:48.072238  152452 logs.go:123] Gathering logs for kube-apiserver [fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef] ...
	I0819 13:52:48.072277  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef"
	I0819 13:52:48.144742  152452 logs.go:123] Gathering logs for coredns [8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5] ...
	I0819 13:52:48.144795  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5"
	I0819 13:52:48.193011  152452 logs.go:123] Gathering logs for dmesg ...
	I0819 13:52:48.193038  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 13:52:48.210157  152452 logs.go:123] Gathering logs for describe nodes ...
	I0819 13:52:48.210198  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 13:52:48.374447  152452 logs.go:123] Gathering logs for kube-controller-manager [ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e] ...
	I0819 13:52:48.374478  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e"
	I0819 13:52:48.431668  152452 logs.go:123] Gathering logs for kindnet [765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9] ...
	I0819 13:52:48.431701  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9"
	I0819 13:52:48.487205  152452 logs.go:123] Gathering logs for storage-provisioner [13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa] ...
	I0819 13:52:48.487242  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa"
	I0819 13:52:48.547481  152452 logs.go:123] Gathering logs for containerd ...
	I0819 13:52:48.547514  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 13:52:48.610774  152452 logs.go:123] Gathering logs for etcd [6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d] ...
	I0819 13:52:48.610815  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d"
	I0819 13:52:48.676770  152452 logs.go:123] Gathering logs for coredns [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d] ...
	I0819 13:52:48.676811  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d"
	I0819 13:52:48.719917  152452 logs.go:123] Gathering logs for kube-scheduler [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c] ...
	I0819 13:52:48.719995  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c"
	I0819 13:52:48.764294  152452 logs.go:123] Gathering logs for kube-scheduler [c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7] ...
	I0819 13:52:48.764367  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7"
	I0819 13:52:48.807362  152452 logs.go:123] Gathering logs for kube-proxy [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303] ...
	I0819 13:52:48.807437  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303"
	I0819 13:52:48.846905  152452 logs.go:123] Gathering logs for kube-proxy [d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28] ...
	I0819 13:52:48.846935  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28"
	I0819 13:52:48.887930  152452 logs.go:123] Gathering logs for kube-controller-manager [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe] ...
	I0819 13:52:48.887962  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe"
	I0819 13:52:48.948162  152452 logs.go:123] Gathering logs for kindnet [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4] ...
	I0819 13:52:48.948198  152452 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4"
	I0819 13:52:49.017994  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:49.018028  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 13:52:49.018115  152452 out.go:270] X Problems detected in kubelet:
	W0819 13:52:49.018145  152452 out.go:270]   Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:49.018161  152452 out.go:270]   Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:49.018169  152452 out.go:270]   Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 13:52:49.018195  152452 out.go:270]   Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532025     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	W0819 13:52:49.018204  152452 out.go:270]   Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532731     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0819 13:52:49.018210  152452 out.go:358] Setting ErrFile to fd 2...
	I0819 13:52:49.018223  152452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:52:59.019588  152452 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0819 13:52:59.031741  152452 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0819 13:52:59.034833  152452 out.go:201] 
	W0819 13:52:59.037405  152452 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0819 13:52:59.037441  152452 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0819 13:52:59.037463  152452 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0819 13:52:59.037469  152452 out.go:270] * 
	W0819 13:52:59.038452  152452 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 13:52:59.042159  152452 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	1d8cf51e8e1df       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   5ad7dde35b05d       dashboard-metrics-scraper-8d5bb5db8-dtszp
	65c22339ea6f7       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   c8328693f153b       storage-provisioner
	15bd5cc6d84b0       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   4a3b75c207302       kubernetes-dashboard-cd95d586-xrfgh
	2c334d0b02f94       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   919af30cf66b2       coredns-74ff55c5b-8qwdj
	c846487e7ff85       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   47c8269345439       kindnet-mn7s7
	13a46bd05c3c5       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   c8328693f153b       storage-provisioner
	9698d20ebef3f       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   2ea2f885124f4       busybox
	024882e3d4a5a       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   91d93e5e2dfbf       kube-proxy-h74p7
	b779c421112b8       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   1b67a3c8df769       kube-apiserver-old-k8s-version-914579
	fa88625298f23       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   f16ab2917075f       kube-controller-manager-old-k8s-version-914579
	db9a1d56a8be5       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   d1dee2933b6b7       kube-scheduler-old-k8s-version-914579
	a93e658c9c660       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   bb9e727391a00       etcd-old-k8s-version-914579
	066eb4ca58d7e       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   69941cbc11c9c       busybox
	8344283822b37       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   c58a4db5df99a       coredns-74ff55c5b-8qwdj
	765975197bf64       6a23fa8fd2b78       7 minutes ago       Exited              kindnet-cni                 0                   93ba08cc02ad3       kindnet-mn7s7
	d8e9102405c0b       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   57d7700aac764       kube-proxy-h74p7
	ff300c34901ca       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   0623cab670eb0       kube-controller-manager-old-k8s-version-914579
	c2b76e34da1ef       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   b3c04887c5148       kube-scheduler-old-k8s-version-914579
	fcd65a8439964       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   4fd5702632bcc       kube-apiserver-old-k8s-version-914579
	6c7959865023d       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   40a23eb18031e       etcd-old-k8s-version-914579
	
	
	==> containerd <==
	Aug 19 13:49:20 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:20.548436642Z" level=info msg="CreateContainer within sandbox \"5ad7dde35b05da35a13022cb687dfead4e9184b0f3518184c3995bdd43123c1c\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"7ce7e0efc1480572df38ef0aa3ca83e2b51a46d1f8ef3c38c6f518096996dfb4\""
	Aug 19 13:49:20 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:20.549028866Z" level=info msg="StartContainer for \"7ce7e0efc1480572df38ef0aa3ca83e2b51a46d1f8ef3c38c6f518096996dfb4\""
	Aug 19 13:49:20 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:20.621699842Z" level=info msg="StartContainer for \"7ce7e0efc1480572df38ef0aa3ca83e2b51a46d1f8ef3c38c6f518096996dfb4\" returns successfully"
	Aug 19 13:49:20 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:20.646752411Z" level=info msg="shim disconnected" id=7ce7e0efc1480572df38ef0aa3ca83e2b51a46d1f8ef3c38c6f518096996dfb4 namespace=k8s.io
	Aug 19 13:49:20 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:20.646823402Z" level=warning msg="cleaning up after shim disconnected" id=7ce7e0efc1480572df38ef0aa3ca83e2b51a46d1f8ef3c38c6f518096996dfb4 namespace=k8s.io
	Aug 19 13:49:20 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:20.646834454Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 13:49:21 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:21.439875089Z" level=info msg="RemoveContainer for \"bb63992e0eeade6822d8519238350fc6c435043d69613dc6c4e76de1967ec718\""
	Aug 19 13:49:21 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:49:21.444768535Z" level=info msg="RemoveContainer for \"bb63992e0eeade6822d8519238350fc6c435043d69613dc6c4e76de1967ec718\" returns successfully"
	Aug 19 13:50:11 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:11.531272557Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:50:11 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:11.536745206Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 19 13:50:11 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:11.538294723Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 19 13:50:11 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:11.538388269Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 19 13:50:49 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:49.532349389Z" level=info msg="CreateContainer within sandbox \"5ad7dde35b05da35a13022cb687dfead4e9184b0f3518184c3995bdd43123c1c\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 19 13:50:49 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:49.550389913Z" level=info msg="CreateContainer within sandbox \"5ad7dde35b05da35a13022cb687dfead4e9184b0f3518184c3995bdd43123c1c\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd\""
	Aug 19 13:50:49 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:49.551048172Z" level=info msg="StartContainer for \"1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd\""
	Aug 19 13:50:49 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:49.628798508Z" level=info msg="StartContainer for \"1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd\" returns successfully"
	Aug 19 13:50:49 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:49.654496261Z" level=info msg="shim disconnected" id=1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd namespace=k8s.io
	Aug 19 13:50:49 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:49.654561048Z" level=warning msg="cleaning up after shim disconnected" id=1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd namespace=k8s.io
	Aug 19 13:50:49 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:49.654572437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 13:50:50 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:50.678224331Z" level=info msg="RemoveContainer for \"7ce7e0efc1480572df38ef0aa3ca83e2b51a46d1f8ef3c38c6f518096996dfb4\""
	Aug 19 13:50:50 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:50:50.686578663Z" level=info msg="RemoveContainer for \"7ce7e0efc1480572df38ef0aa3ca83e2b51a46d1f8ef3c38c6f518096996dfb4\" returns successfully"
	Aug 19 13:52:58 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:52:58.531081073Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:52:58 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:52:58.538301771Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 19 13:52:58 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:52:58.540440690Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 19 13:52:58 old-k8s-version-914579 containerd[569]: time="2024-08-19T13:52:58.540460899Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [2c334d0b02f94acf867081f56ef726d597384f48a9f72e1851695738231ec36d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43201 - 44516 "HINFO IN 7720021647339411727.4504186340590290532. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011119709s
	
	
	==> coredns [8344283822b374d886d00d290f18631a7790271c48c1d318d5b00a1cf12609a5] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34525 - 54311 "HINFO IN 3562076921747927934.6242343775789087393. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011868523s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-914579
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-914579
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=old-k8s-version-914579
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T13_44_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 13:44:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-914579
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 13:52:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 13:47:59 +0000   Mon, 19 Aug 2024 13:44:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 13:47:59 +0000   Mon, 19 Aug 2024 13:44:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 13:47:59 +0000   Mon, 19 Aug 2024 13:44:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 13:47:59 +0000   Mon, 19 Aug 2024 13:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-914579
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 92ae8e36aa874bb3b21c4b727e8f2a62
	  System UUID:                c676933b-07bb-4b7a-af64-79fa59068f2b
	  Boot ID:                    8c9f4b3e-6245-4429-b714-db63b5b637f4
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-74ff55c5b-8qwdj                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m
	  kube-system                 etcd-old-k8s-version-914579                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m7s
	  kube-system                 kindnet-mn7s7                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m
	  kube-system                 kube-apiserver-old-k8s-version-914579             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-controller-manager-old-k8s-version-914579    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-proxy-h74p7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-scheduler-old-k8s-version-914579             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 metrics-server-9975d5f86-ncd6r                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m33s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-dtszp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-xrfgh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m27s (x5 over 8m27s)  kubelet     Node old-k8s-version-914579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x5 over 8m27s)  kubelet     Node old-k8s-version-914579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x4 over 8m27s)  kubelet     Node old-k8s-version-914579 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m8s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m8s                   kubelet     Node old-k8s-version-914579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m8s                   kubelet     Node old-k8s-version-914579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m8s                   kubelet     Node old-k8s-version-914579 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m7s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m                     kubelet     Node old-k8s-version-914579 status is now: NodeReady
	  Normal  Starting                 7m59s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-914579 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-914579 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x7 over 6m4s)    kubelet     Node old-k8s-version-914579 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m49s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Aug19 12:28] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [6c7959865023d5c1d31e7b8c33d4dca318c0b748bfd18163a76a3658248a339d] <==
	2024-08-19 13:44:35.197188 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	raft2024/08/19 13:44:35 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/08/19 13:44:35 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/08/19 13:44:35 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/08/19 13:44:35 INFO: ea7e25599daad906 became leader at term 2
	raft2024/08/19 13:44:35 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-08-19 13:44:35.364702 I | etcdserver: published {Name:old-k8s-version-914579 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-08-19 13:44:35.365047 I | embed: ready to serve client requests
	2024-08-19 13:44:35.366783 I | embed: ready to serve client requests
	2024-08-19 13:44:35.367488 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-19 13:44:35.373447 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-19 13:44:35.374160 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-19 13:44:35.374811 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-19 13:44:35.385921 I | embed: serving client requests on 192.168.76.2:2379
	2024-08-19 13:44:57.051017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:44:57.891470 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:45:07.889817 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:45:17.888797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:45:27.888789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:45:37.892534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:45:47.889565 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:45:57.889109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:46:07.889170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:46:17.888799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:46:27.895041 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [a93e658c9c6602282efd1636668d5404651e1df0eaba230af4e0430877ea618d] <==
	2024-08-19 13:48:55.814061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:49:05.814010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:49:15.814183 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:49:25.814019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:49:35.814161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:49:45.814152 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:49:55.814015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:50:05.814096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:50:15.814140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:50:25.813891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:50:35.814421 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:50:45.813984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:50:55.814105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:51:05.814303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:51:15.814191 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:51:25.814142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:51:35.814064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:51:45.814077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:51:55.814168 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:52:05.814083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:52:15.814087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:52:25.814048 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:52:35.814157 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:52:45.814748 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 13:52:55.813967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 13:53:01 up 1 day,  3:35,  0 users,  load average: 0.51, 1.73, 2.57
	Linux old-k8s-version-914579 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [765975197bf640c76b530d4282ed5d13d03238e0ae93cd4aca67241e2f5152e9] <==
	I0819 13:45:24.993133       1 main.go:299] handling current node
	W0819 13:45:26.421864       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:45:26.421897       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 13:45:34.993442       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:45:34.993479       1 main.go:299] handling current node
	W0819 13:45:39.169771       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:45:39.169883       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 13:45:42.798915       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:45:42.799031       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 13:45:44.992792       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:45:44.992834       1 main.go:299] handling current node
	W0819 13:45:47.438178       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:45:47.438211       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 13:45:54.992657       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:45:54.992696       1 main.go:299] handling current node
	I0819 13:46:04.993093       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:46:04.993129       1 main.go:299] handling current node
	I0819 13:46:14.993630       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:46:14.993663       1 main.go:299] handling current node
	W0819 13:46:20.699111       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:46:20.699153       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:46:24.992803       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:46:24.992840       1 main.go:299] handling current node
	W0819 13:46:26.160531       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:46:26.160574       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kindnet [c846487e7ff859548debeb2531a1f0b42651196f23aa0606336373cbd8cc2cb4] <==
	E0819 13:51:51.285318       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:51:52.617439       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:51:52.617478       1 main.go:299] handling current node
	W0819 13:52:02.107232       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:52:02.107266       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 13:52:02.616828       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:52:02.616864       1 main.go:299] handling current node
	I0819 13:52:12.616965       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:52:12.616999       1 main.go:299] handling current node
	I0819 13:52:22.616733       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:52:22.616779       1 main.go:299] handling current node
	W0819 13:52:23.547745       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:52:23.547869       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 13:52:29.287535       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:52:29.287574       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 13:52:32.617562       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:52:32.617605       1 main.go:299] handling current node
	I0819 13:52:42.617281       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:52:42.617319       1 main.go:299] handling current node
	W0819 13:52:42.733348       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 13:52:42.733385       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 13:52:52.617581       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0819 13:52:52.617624       1 main.go:299] handling current node
	W0819 13:52:55.466333       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 13:52:55.466408       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [b779c421112b8e180714c33a49f2622b00391462fc2a2bfb51100d7824fdb234] <==
	I0819 13:49:01.158263       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:49:01.158272       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 13:49:43.561969       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:49:43.562017       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:49:43.562049       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0819 13:50:12.186704       1 handler_proxy.go:102] no RequestInfo found in the context
	E0819 13:50:12.186781       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0819 13:50:12.186796       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:50:25.366718       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:50:25.366770       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:50:25.366779       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 13:51:09.836026       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:51:09.836077       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:51:09.836086       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 13:51:54.822596       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:51:54.822644       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:51:54.822654       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0819 13:52:09.712967       1 handler_proxy.go:102] no RequestInfo found in the context
	E0819 13:52:09.713040       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0819 13:52:09.713049       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 13:52:36.293207       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:52:36.293252       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:52:36.293261       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [fcd65a8439964dae437d73a25791d2c38189fd5f9e340dc4e33ca0cc390524ef] <==
	I0819 13:44:42.730863       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 13:44:42.743485       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0819 13:44:42.748890       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0819 13:44:42.749075       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0819 13:44:43.281668       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 13:44:43.332483       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0819 13:44:43.458825       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0819 13:44:43.461295       1 controller.go:606] quota admission added evaluator for: endpoints
	I0819 13:44:43.465764       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 13:44:44.460089       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0819 13:44:45.326071       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0819 13:44:45.387606       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0819 13:44:53.847398       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 13:45:01.508951       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0819 13:45:01.519895       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0819 13:45:14.668430       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:45:14.668488       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:45:14.668497       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 13:45:53.126727       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:45:53.126773       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:45:53.126782       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 13:46:27.841138       1 client.go:360] parsed scheme: "passthrough"
	I0819 13:46:27.841387       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 13:46:27.841515       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0819 13:46:27.941990       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [fa88625298f23e335aba94d646c993344de4ef6b8da759e9d9f176ae78f9a1fe] <==
	W0819 13:48:33.039401       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:48:59.147131       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:49:04.690048       1 request.go:655] Throttling request took 1.048397344s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W0819 13:49:05.554341       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:49:29.649517       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:49:37.205039       1 request.go:655] Throttling request took 1.048229101s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 13:49:38.057618       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:50:00.163669       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:50:09.708223       1 request.go:655] Throttling request took 1.048363375s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
	W0819 13:50:10.559881       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:50:30.681859       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:50:42.210339       1 request.go:655] Throttling request took 1.048333518s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 13:50:43.061838       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:51:01.183830       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:51:14.712384       1 request.go:655] Throttling request took 1.048172222s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 13:51:15.564014       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:51:31.685828       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:51:47.214392       1 request.go:655] Throttling request took 1.047812754s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0819 13:51:48.066295       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:52:02.187655       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:52:19.716725       1 request.go:655] Throttling request took 1.048210526s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0819 13:52:20.568212       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 13:52:32.689542       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 13:52:52.218699       1 request.go:655] Throttling request took 1.048241801s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0819 13:52:53.070382       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [ff300c34901ca29544018036fcfb1d22bafcc0bace0dcf64fe0bd253b66ef58e] <==
	I0819 13:45:01.476840       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0819 13:45:01.488600       1 shared_informer.go:247] Caches are synced for attach detach 
	I0819 13:45:01.496190       1 shared_informer.go:247] Caches are synced for taint 
	I0819 13:45:01.496202       1 shared_informer.go:247] Caches are synced for disruption 
	I0819 13:45:01.496219       1 disruption.go:339] Sending events to api server.
	I0819 13:45:01.496269       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0819 13:45:01.496271       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0819 13:45:01.496338       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-914579. Assuming now as a timestamp.
	I0819 13:45:01.496380       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	I0819 13:45:01.496424       1 event.go:291] "Event occurred" object="old-k8s-version-914579" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-914579 event: Registered Node old-k8s-version-914579 in Controller"
	I0819 13:45:01.496436       1 shared_informer.go:247] Caches are synced for deployment 
	I0819 13:45:01.497153       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0819 13:45:01.598194       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0819 13:45:01.598712       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5zwrf"
	I0819 13:45:01.598895       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mn7s7"
	I0819 13:45:01.657765       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0819 13:45:01.672836       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-8qwdj"
	I0819 13:45:01.673958       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h74p7"
	I0819 13:45:01.857874       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0819 13:45:01.946344       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0819 13:45:01.946390       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 13:45:03.166727       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0819 13:45:03.213194       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5zwrf"
	I0819 13:46:27.685087       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0819 13:46:27.802315       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [024882e3d4a5aa678ba55df81b6f10533d9f05f6977090d57dc692926959b303] <==
	I0819 13:47:12.169390       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0819 13:47:12.169479       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0819 13:47:12.191003       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0819 13:47:12.191100       1 server_others.go:185] Using iptables Proxier.
	I0819 13:47:12.191306       1 server.go:650] Version: v1.20.0
	I0819 13:47:12.191877       1 config.go:315] Starting service config controller
	I0819 13:47:12.191886       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0819 13:47:12.207062       1 config.go:224] Starting endpoint slice config controller
	I0819 13:47:12.207098       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0819 13:47:12.292058       1 shared_informer.go:247] Caches are synced for service config 
	I0819 13:47:12.307487       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [d8e9102405c0bfd7286b17e5f2348226ea534b03ab646ed8bc5c514f697bdd28] <==
	I0819 13:45:02.617070       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0819 13:45:02.617352       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0819 13:45:02.639772       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0819 13:45:02.640299       1 server_others.go:185] Using iptables Proxier.
	I0819 13:45:02.640652       1 server.go:650] Version: v1.20.0
	I0819 13:45:02.641469       1 config.go:315] Starting service config controller
	I0819 13:45:02.641615       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0819 13:45:02.641722       1 config.go:224] Starting endpoint slice config controller
	I0819 13:45:02.641808       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0819 13:45:02.746716       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0819 13:45:02.746777       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [c2b76e34da1effdab4751291934d76bf6fae4d64b9a57c2e308028866ca67cc7] <==
	W0819 13:44:41.864748       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:44:41.864753       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:44:41.956754       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0819 13:44:41.961984       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:44:41.962234       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:44:41.962380       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0819 13:44:41.975730       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 13:44:41.976179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 13:44:41.976300       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 13:44:41.976382       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 13:44:41.976230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 13:44:41.978626       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 13:44:41.979029       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 13:44:41.979354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 13:44:41.979586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 13:44:41.979932       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:44:41.982551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 13:44:41.983944       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 13:44:42.840652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 13:44:42.860000       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 13:44:43.019337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 13:44:43.033636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 13:44:43.034997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 13:44:43.046743       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0819 13:44:43.562536       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [db9a1d56a8be5f2c08c944dc19babddbd855957ea6d3d8e032408575130a610c] <==
	I0819 13:47:03.635518       1 serving.go:331] Generated self-signed cert in-memory
	W0819 13:47:08.459292       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 13:47:08.459323       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 13:47:08.459342       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 13:47:08.459347       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 13:47:08.797468       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0819 13:47:08.809782       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:47:08.810024       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 13:47:08.814072       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0819 13:47:09.019028       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 19 13:51:31 old-k8s-version-914579 kubelet[660]: E0819 13:51:31.530699     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:51:36 old-k8s-version-914579 kubelet[660]: I0819 13:51:36.529792     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd
	Aug 19 13:51:36 old-k8s-version-914579 kubelet[660]: E0819 13:51:36.530142     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	Aug 19 13:51:42 old-k8s-version-914579 kubelet[660]: E0819 13:51:42.530485     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:51:48 old-k8s-version-914579 kubelet[660]: I0819 13:51:48.529793     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd
	Aug 19 13:51:48 old-k8s-version-914579 kubelet[660]: E0819 13:51:48.530149     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	Aug 19 13:51:54 old-k8s-version-914579 kubelet[660]: E0819 13:51:54.530585     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:52:02 old-k8s-version-914579 kubelet[660]: I0819 13:52:02.529963     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd
	Aug 19 13:52:02 old-k8s-version-914579 kubelet[660]: E0819 13:52:02.531227     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	Aug 19 13:52:07 old-k8s-version-914579 kubelet[660]: E0819 13:52:07.531110     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: I0819 13:52:16.529668     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd
	Aug 19 13:52:16 old-k8s-version-914579 kubelet[660]: E0819 13:52:16.530020     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	Aug 19 13:52:22 old-k8s-version-914579 kubelet[660]: E0819 13:52:22.530508     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: I0819 13:52:31.529894     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd
	Aug 19 13:52:31 old-k8s-version-914579 kubelet[660]: E0819 13:52:31.532198     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	Aug 19 13:52:33 old-k8s-version-914579 kubelet[660]: E0819 13:52:33.531083     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: I0819 13:52:45.530907     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd
	Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532025     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	Aug 19 13:52:45 old-k8s-version-914579 kubelet[660]: E0819 13:52:45.532731     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 13:52:57 old-k8s-version-914579 kubelet[660]: I0819 13:52:57.530227     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1d8cf51e8e1df44c1906f7991aaa3f767cff269e3c24ff41313cd922abeec2fd
	Aug 19 13:52:57 old-k8s-version-914579 kubelet[660]: E0819 13:52:57.531096     660 pod_workers.go:191] Error syncing pod 092dcf56-3dcc-4679-ab9a-383cd577ebc3 ("dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dtszp_kubernetes-dashboard(092dcf56-3dcc-4679-ab9a-383cd577ebc3)"
	Aug 19 13:52:58 old-k8s-version-914579 kubelet[660]: E0819 13:52:58.540848     660 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 19 13:52:58 old-k8s-version-914579 kubelet[660]: E0819 13:52:58.541264     660 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 19 13:52:58 old-k8s-version-914579 kubelet[660]: E0819 13:52:58.541424     660 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-gngrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 19 13:52:58 old-k8s-version-914579 kubelet[660]: E0819 13:52:58.541477     660 pod_workers.go:191] Error syncing pod a1bd7ba8-e312-4ded-a04c-d370bd6787a0 ("metrics-server-9975d5f86-ncd6r_kube-system(a1bd7ba8-e312-4ded-a04c-d370bd6787a0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [15bd5cc6d84b0d0c4efce828950c59f373f1bc865ae66f1949d2eef2c9a95b75] <==
	2024/08/19 13:47:33 Starting overwatch
	2024/08/19 13:47:33 Using namespace: kubernetes-dashboard
	2024/08/19 13:47:33 Using in-cluster config to connect to apiserver
	2024/08/19 13:47:33 Using secret token for csrf signing
	2024/08/19 13:47:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/19 13:47:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/19 13:47:33 Successful initial request to the apiserver, version: v1.20.0
	2024/08/19 13:47:33 Generating JWE encryption key
	2024/08/19 13:47:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/19 13:47:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/19 13:47:33 Initializing JWE encryption key from synchronized object
	2024/08/19 13:47:33 Creating in-cluster Sidecar client
	2024/08/19 13:47:33 Serving insecurely on HTTP port: 9090
	2024/08/19 13:47:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:48:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:48:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:49:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:49:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:50:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:50:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:51:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:51:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:52:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 13:52:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [13a46bd05c3c5fdc6450ed883a254f38627921b9e47309563f0258e3056dc8fa] <==
	I0819 13:47:12.124911       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 13:47:42.136592       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [65c22339ea6f7f88b4be1592c18662038b27eacd8db0d2a2f924fadc09a4238b] <==
	I0819 13:47:55.648858       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 13:47:55.675565       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 13:47:55.675619       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 13:48:13.145190       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 13:48:13.145614       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-914579_c50876fa-3db7-4a6b-af10-d1714b7aebfa!
	I0819 13:48:13.147540       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31c5f26d-fefe-485e-8a85-8fde3a663c24", APIVersion:"v1", ResourceVersion:"830", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-914579_c50876fa-3db7-4a6b-af10-d1714b7aebfa became leader
	I0819 13:48:13.246455       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-914579_c50876fa-3db7-4a6b-af10-d1714b7aebfa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-914579 -n old-k8s-version-914579
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-914579 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-ncd6r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-914579 describe pod metrics-server-9975d5f86-ncd6r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-914579 describe pod metrics-server-9975d5f86-ncd6r: exit status 1 (190.65589ms)

                                                
                                                
** stderr ** 
	E0819 13:53:03.179066  165494 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0819 13:53:03.197563  165494 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0819 13:53:03.202168  165494 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0819 13:53:03.207879  165494 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0819 13:53:03.217026  165494 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0819 13:53:03.223387  165494 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	Error from server (NotFound): pods "metrics-server-9975d5f86-ncd6r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-914579 describe pod metrics-server-9975d5f86-ncd6r: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.41s)

                                                
                                    

Test pass (296/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.31.0/json-events 6.19
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 216.2
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 15.54
34 TestAddons/parallel/Ingress 20.19
35 TestAddons/parallel/InspektorGadget 11.1
36 TestAddons/parallel/MetricsServer 5.83
39 TestAddons/parallel/CSI 45.22
40 TestAddons/parallel/Headlamp 17.4
41 TestAddons/parallel/CloudSpanner 5.97
42 TestAddons/parallel/LocalPath 8.8
43 TestAddons/parallel/NvidiaDevicePlugin 6.59
44 TestAddons/parallel/Yakd 12.01
45 TestAddons/StoppedEnableDisable 12.39
46 TestCertOptions 41.21
47 TestCertExpiration 231.52
49 TestForceSystemdFlag 51.18
50 TestForceSystemdEnv 42.65
56 TestErrorSpam/setup 33.68
57 TestErrorSpam/start 0.8
58 TestErrorSpam/status 1.11
59 TestErrorSpam/pause 1.84
60 TestErrorSpam/unpause 1.85
61 TestErrorSpam/stop 12.35
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 65.86
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.83
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.43
73 TestFunctional/serial/CacheCmd/cache/add_local 1.48
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 46.54
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.7
84 TestFunctional/serial/LogsFileCmd 1.74
85 TestFunctional/serial/InvalidService 4.97
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 6.52
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.21
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 9.6
96 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/SSHCmd 0.69
100 TestFunctional/parallel/CpCmd 2.4
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.62
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
111 TestFunctional/parallel/License 0.28
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.53
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
124 TestFunctional/parallel/ServiceCmd/List 0.5
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
127 TestFunctional/parallel/ServiceCmd/Format 0.4
128 TestFunctional/parallel/ServiceCmd/URL 0.39
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
130 TestFunctional/parallel/ProfileCmd/profile_list 0.4
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
132 TestFunctional/parallel/MountCmd/any-port 6.8
133 TestFunctional/parallel/MountCmd/specific-port 1.74
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.28
135 TestFunctional/parallel/Version/short 0.06
136 TestFunctional/parallel/Version/components 1.54
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.92
142 TestFunctional/parallel/ImageCommands/Setup 0.66
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 114.84
160 TestMultiControlPlane/serial/DeployApp 41.3
161 TestMultiControlPlane/serial/PingHostFromPods 1.62
162 TestMultiControlPlane/serial/AddWorkerNode 24.08
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
165 TestMultiControlPlane/serial/CopyFile 19.21
166 TestMultiControlPlane/serial/StopSecondaryNode 12.97
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.98
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.13
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.36
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.66
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
173 TestMultiControlPlane/serial/StopCluster 36.21
174 TestMultiControlPlane/serial/RestartCluster 63.38
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 44.32
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
181 TestJSONOutput/start/Command 49.43
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.96
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.84
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 40.05
207 TestKicCustomNetwork/use_default_bridge_network 34.34
208 TestKicExistingNetwork 32.26
209 TestKicCustomSubnet 38.35
210 TestKicStaticIP 34.55
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 70.4
215 TestMountStart/serial/StartWithMountFirst 7.11
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 7.13
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.66
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.54
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 76.36
227 TestMultiNode/serial/DeployApp2Nodes 16.11
228 TestMultiNode/serial/PingHostFrom2Pods 1.05
229 TestMultiNode/serial/AddNode 18.3
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.34
232 TestMultiNode/serial/CopyFile 10.19
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 9.41
235 TestMultiNode/serial/RestartKeepsNodes 91.93
236 TestMultiNode/serial/DeleteNode 5.56
237 TestMultiNode/serial/StopMultiNode 24.08
238 TestMultiNode/serial/RestartMultiNode 56.55
239 TestMultiNode/serial/ValidateNameConflict 33.08
244 TestPreload 121.62
246 TestScheduledStopUnix 107.56
249 TestInsufficientStorage 11.29
250 TestRunningBinaryUpgrade 92
252 TestKubernetesUpgrade 102.89
253 TestMissingContainerUpgrade 175.69
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 40.24
257 TestNoKubernetes/serial/StartWithStopK8s 8.9
258 TestNoKubernetes/serial/Start 7.94
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
260 TestNoKubernetes/serial/ProfileList 1
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.03
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.44
264 TestStoppedBinaryUpgrade/Setup 1.34
265 TestStoppedBinaryUpgrade/Upgrade 131.98
274 TestPause/serial/Start 66.27
275 TestStoppedBinaryUpgrade/MinikubeLogs 2.02
276 TestPause/serial/SecondStartNoReconfiguration 7.81
277 TestPause/serial/Pause 0.77
278 TestPause/serial/VerifyStatus 0.32
279 TestPause/serial/Unpause 0.64
280 TestPause/serial/PauseAgain 0.87
281 TestPause/serial/DeletePaused 2.59
282 TestPause/serial/VerifyDeletedResources 7.3
290 TestNetworkPlugins/group/false 5.02
295 TestStartStop/group/old-k8s-version/serial/FirstStart 135.2
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.71
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
298 TestStartStop/group/old-k8s-version/serial/Stop 12.45
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
301 TestStartStop/group/no-preload/serial/FirstStart 80.35
303 TestStartStop/group/no-preload/serial/DeployApp 8.42
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
305 TestStartStop/group/no-preload/serial/Stop 12.12
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 267.86
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
311 TestStartStop/group/no-preload/serial/Pause 3.37
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/embed-certs/serial/FirstStart 56.33
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.24
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
317 TestStartStop/group/old-k8s-version/serial/Pause 4.31
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.76
320 TestStartStop/group/embed-certs/serial/DeployApp 8.42
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
322 TestStartStop/group/embed-certs/serial/Stop 12.12
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
325 TestStartStop/group/embed-certs/serial/SecondStart 279.47
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.59
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.4
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.81
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/embed-certs/serial/Pause 3.07
335 TestStartStop/group/newest-cni/serial/FirstStart 39.61
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.36
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.9
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.33
342 TestStartStop/group/newest-cni/serial/Stop 1.39
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
344 TestStartStop/group/newest-cni/serial/SecondStart 20.74
345 TestNetworkPlugins/group/auto/Start 69.39
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
349 TestStartStop/group/newest-cni/serial/Pause 3.75
350 TestNetworkPlugins/group/kindnet/Start 62.61
351 TestNetworkPlugins/group/auto/KubeletFlags 0.28
352 TestNetworkPlugins/group/auto/NetCatPod 8.31
353 TestNetworkPlugins/group/auto/DNS 0.19
354 TestNetworkPlugins/group/auto/Localhost 0.19
355 TestNetworkPlugins/group/auto/HairPin 0.16
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.43
359 TestNetworkPlugins/group/calico/Start 60.35
360 TestNetworkPlugins/group/kindnet/DNS 0.24
361 TestNetworkPlugins/group/kindnet/Localhost 0.21
362 TestNetworkPlugins/group/kindnet/HairPin 0.22
363 TestNetworkPlugins/group/custom-flannel/Start 59.62
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.42
366 TestNetworkPlugins/group/calico/NetCatPod 11.34
367 TestNetworkPlugins/group/calico/DNS 0.24
368 TestNetworkPlugins/group/calico/Localhost 0.17
369 TestNetworkPlugins/group/calico/HairPin 0.19
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.44
372 TestNetworkPlugins/group/custom-flannel/DNS 0.26
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
375 TestNetworkPlugins/group/enable-default-cni/Start 84.42
376 TestNetworkPlugins/group/flannel/Start 54.39
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
381 TestNetworkPlugins/group/flannel/NetCatPod 9.29
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
383 TestNetworkPlugins/group/flannel/DNS 0.26
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
385 TestNetworkPlugins/group/flannel/Localhost 0.23
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
387 TestNetworkPlugins/group/flannel/HairPin 0.3
388 TestNetworkPlugins/group/bridge/Start 69.15
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 10.27
391 TestNetworkPlugins/group/bridge/DNS 0.19
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.20.0/json-events (7.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-106115 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-106115 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.039432634s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.04s)
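For reference, the command this test wraps can be re-run by hand. A minimal sketch, assuming the arm64 minikube binary sits at out/minikube-linux-arm64 as in this job; flags are copied from the Run line above (the test passes --container-runtime twice with the same value, which is collapsed to one occurrence here):

    # Fetch the v1.20.0 preload and kicbase image without creating a cluster.
    # -o=json emits machine-readable progress events; --force skips driver validation.
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-106115 \
      --force --alsologtostderr \
      --kubernetes-version=v1.20.0 \
      --container-runtime=containerd \
      --driver=docker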

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-106115
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-106115: exit status 85 (85.168437ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-106115 | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC |          |
	|         | -p download-only-106115        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:55:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:55:49.729325 4146552 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:55:49.729455 4146552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:55:49.729466 4146552 out.go:358] Setting ErrFile to fd 2...
	I0819 12:55:49.729471 4146552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:55:49.729711 4146552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	W0819 12:55:49.729867 4146552 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19479-4141166/.minikube/config/config.json: open /home/jenkins/minikube-integration/19479-4141166/.minikube/config/config.json: no such file or directory
	I0819 12:55:49.730285 4146552 out.go:352] Setting JSON to true
	I0819 12:55:49.731288 4146552 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":95894,"bootTime":1723976256,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 12:55:49.731364 4146552 start.go:139] virtualization:  
	I0819 12:55:49.734147 4146552 out.go:97] [download-only-106115] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0819 12:55:49.734356 4146552 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 12:55:49.734420 4146552 notify.go:220] Checking for updates...
	I0819 12:55:49.735660 4146552 out.go:169] MINIKUBE_LOCATION=19479
	I0819 12:55:49.737144 4146552 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:55:49.739125 4146552 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 12:55:49.740691 4146552 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 12:55:49.742118 4146552 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 12:55:49.744501 4146552 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 12:55:49.744746 4146552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:55:49.766648 4146552 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:55:49.766748 4146552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:55:49.834750 4146552 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 12:55:49.824663642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:55:49.834868 4146552 docker.go:307] overlay module found
	I0819 12:55:49.836209 4146552 out.go:97] Using the docker driver based on user configuration
	I0819 12:55:49.836239 4146552 start.go:297] selected driver: docker
	I0819 12:55:49.836247 4146552 start.go:901] validating driver "docker" against <nil>
	I0819 12:55:49.836361 4146552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:55:49.892725 4146552 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 12:55:49.882928182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:55:49.892895 4146552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 12:55:49.893194 4146552 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 12:55:49.893373 4146552 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 12:55:49.895023 4146552 out.go:169] Using Docker driver with root privileges
	I0819 12:55:49.896520 4146552 cni.go:84] Creating CNI manager for ""
	I0819 12:55:49.896541 4146552 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:55:49.896553 4146552 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 12:55:49.896632 4146552 start.go:340] cluster config:
	{Name:download-only-106115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-106115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:55:49.898432 4146552 out.go:97] Starting "download-only-106115" primary control-plane node in "download-only-106115" cluster
	I0819 12:55:49.898452 4146552 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 12:55:49.900112 4146552 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 12:55:49.900139 4146552 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 12:55:49.900308 4146552 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 12:55:49.915097 4146552 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 12:55:49.915730 4146552 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 12:55:49.915863 4146552 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 12:55:50.045115 4146552 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 12:55:50.045153 4146552 cache.go:56] Caching tarball of preloaded images
	I0819 12:55:50.045324 4146552 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 12:55:50.047697 4146552 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 12:55:50.047731 4146552 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 12:55:50.171188 4146552 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 12:55:53.886931 4146552 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	
	
	* The control-plane node download-only-106115 host does not exist
	  To start a cluster, run: "minikube start -p download-only-106115"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
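The non-zero exit is expected rather than a failure: the profile only ever ran with --download-only, so there is no control-plane host, and "minikube logs" can only print the audit table and last-start log shown above before exiting with status 85. A small sketch of the same check, with the profile name from this run:

    # Request logs from a profile that was never started; the command exits non-zero
    # (85 in this run) and prints the hint that the host does not exist.
    out/minikube-linux-arm64 logs -p download-only-106115
    echo "logs exit code: $?"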

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-106115
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (6.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-072642 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-072642 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.189483343s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-072642
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-072642: exit status 85 (62.131955ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-106115 | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC |                     |
	|         | -p download-only-106115        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC | 19 Aug 24 12:55 UTC |
	| delete  | -p download-only-106115        | download-only-106115 | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC | 19 Aug 24 12:55 UTC |
	| start   | -o=json --download-only        | download-only-072642 | jenkins | v1.33.1 | 19 Aug 24 12:55 UTC |                     |
	|         | -p download-only-072642        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:55:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 12:55:57.214807 4146757 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:55:57.214956 4146757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:55:57.214968 4146757 out.go:358] Setting ErrFile to fd 2...
	I0819 12:55:57.214973 4146757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:55:57.215200 4146757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 12:55:57.215604 4146757 out.go:352] Setting JSON to true
	I0819 12:55:57.216543 4146757 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":95901,"bootTime":1723976256,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 12:55:57.216626 4146757 start.go:139] virtualization:  
	I0819 12:55:57.218409 4146757 out.go:97] [download-only-072642] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 12:55:57.218652 4146757 notify.go:220] Checking for updates...
	I0819 12:55:57.219913 4146757 out.go:169] MINIKUBE_LOCATION=19479
	I0819 12:55:57.221528 4146757 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:55:57.222649 4146757 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 12:55:57.223815 4146757 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 12:55:57.224822 4146757 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 12:55:57.227441 4146757 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 12:55:57.227704 4146757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:55:57.251251 4146757 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:55:57.251357 4146757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:55:57.319168 4146757 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 12:55:57.309211655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:55:57.319282 4146757 docker.go:307] overlay module found
	I0819 12:55:57.320640 4146757 out.go:97] Using the docker driver based on user configuration
	I0819 12:55:57.320664 4146757 start.go:297] selected driver: docker
	I0819 12:55:57.320670 4146757 start.go:901] validating driver "docker" against <nil>
	I0819 12:55:57.320773 4146757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:55:57.379562 4146757 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 12:55:57.370366182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:55:57.379732 4146757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 12:55:57.380047 4146757 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 12:55:57.380206 4146757 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 12:55:57.381663 4146757 out.go:169] Using Docker driver with root privileges
	I0819 12:55:57.382768 4146757 cni.go:84] Creating CNI manager for ""
	I0819 12:55:57.382791 4146757 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:55:57.382800 4146757 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 12:55:57.382880 4146757 start.go:340] cluster config:
	{Name:download-only-072642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-072642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:55:57.384385 4146757 out.go:97] Starting "download-only-072642" primary control-plane node in "download-only-072642" cluster
	I0819 12:55:57.384404 4146757 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 12:55:57.385543 4146757 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 12:55:57.385567 4146757 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 12:55:57.385751 4146757 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 12:55:57.400459 4146757 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 12:55:57.400601 4146757 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 12:55:57.400627 4146757 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 12:55:57.400640 4146757 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 12:55:57.400649 4146757 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 12:55:57.509645 4146757 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 12:55:57.509676 4146757 cache.go:56] Caching tarball of preloaded images
	I0819 12:55:57.509857 4146757 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 12:55:57.511347 4146757 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 12:55:57.511377 4146757 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 12:55:57.628903 4146757 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 12:56:01.767307 4146757 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 12:56:01.767421 4146757 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19479-4141166/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-072642 host does not exist
	  To start a cluster, run: "minikube start -p download-only-072642"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-072642
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-234697 --alsologtostderr --binary-mirror http://127.0.0.1:44223 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-234697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-234697
--- PASS: TestBinaryMirror (0.57s)
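TestBinaryMirror points the download of the kubectl/kubelet/kubeadm binaries at a local HTTP endpoint instead of the default upstream location; the test presumably starts its own server on the port seen here. A rough reproduction, assuming something is already listening on 127.0.0.1:44223:

    # Download Kubernetes binaries through a local mirror, then remove the throwaway profile.
    out/minikube-linux-arm64 start --download-only -p binary-mirror-234697 --alsologtostderr \
      --binary-mirror http://127.0.0.1:44223 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p binary-mirror-234697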

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-789485
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-789485: exit status 85 (71.713107ms)

                                                
                                                
-- stdout --
	* Profile "addons-789485" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-789485"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-789485
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-789485: exit status 85 (70.729911ms)

                                                
                                                
-- stdout --
	* Profile "addons-789485" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-789485"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
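Both PreSetup cases assert that addon commands against a profile that has never been created fail fast with exit status 85 and a hint, instead of creating any state. Reproduced by hand with the profile name from this run:

    # Neither command creates a profile; both exit non-zero with the
    # 'Profile "addons-789485" not found' hint captured in the stdout above.
    out/minikube-linux-arm64 addons enable dashboard -p addons-789485;  echo "enable exit:  $?"
    out/minikube-linux-arm64 addons disable dashboard -p addons-789485; echo "disable exit: $?"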

                                                
                                    
x
+
TestAddons/Setup (216.2s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-789485 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-789485 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m36.194241428s)
--- PASS: TestAddons/Setup (216.20s)
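The setup run enables thirteen addons in a single start; the full invocation is in the Run line above. Restated with the flags grouped for readability (driver, runtime, and addon list copied verbatim from this run):

    # Single-node cluster with the addon set exercised by the TestAddons group.
    out/minikube-linux-arm64 start -p addons-789485 --wait=true --memory=4000 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
      --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=ingress --addons=ingress-dns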

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-789485 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-789485 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.837093ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-gtgx5" [d1858f78-2020-413b-b3d6-e5957d671bc6] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003996982s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2kb7c" [19d924e8-1aab-4715-8b17-070f1796dd3f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00464144s
addons_test.go:342: (dbg) Run:  kubectl --context addons-789485 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-789485 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-789485 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.526149493s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 ip
2024/08/19 13:03:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.54s)
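The registry assertion has an in-cluster half and a host-side half: a throwaway busybox pod probes the registry Service by its cluster DNS name, and the test then issues a plain HTTP GET to port 5000 on the node IP. The curl below is a stand-in for that GET; both commands assume the addons-789485 context from this run:

    # In-cluster: the registry Service must answer on its cluster-local name.
    kubectl --context addons-789485 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # Host-side: reach the registry through the node IP reported by minikube.
    IP="$(out/minikube-linux-arm64 -p addons-789485 ip)"
    curl -sS "http://${IP}:5000/" >/dev/null && echo "registry reachable at ${IP}:5000"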

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-789485 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-789485 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-789485 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [28e80626-bdda-46cd-917a-4675a40406d7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [28e80626-bdda-46cd-917a-4675a40406d7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003625994s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-789485 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-789485 addons disable ingress-dns --alsologtostderr -v=1: (1.137910995s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-789485 addons disable ingress --alsologtostderr -v=1: (7.854101444s)
--- PASS: TestAddons/parallel/Ingress (20.19s)
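Two data paths are checked here: an HTTP request from inside the node carrying the Host header that the testdata ingress routes, and a DNS lookup of the example hostname against the node IP to exercise ingress-dns. Roughly, with the hostnames from this run:

    # Through the nginx ingress: curl from the node with the Host header set.
    out/minikube-linux-arm64 -p addons-789485 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Through ingress-dns: resolve the test record against the node IP.
    IP="$(out/minikube-linux-arm64 -p addons-789485 ip)"
    nslookup hello-john.test "${IP}"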

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.1s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-f7455" [fca70167-defc-4dab-b45b-9e0e93156cfd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006247081s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-789485
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-789485: (6.095963171s)
--- PASS: TestAddons/parallel/InspektorGadget (11.10s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.297689ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-7576p" [d98f94a6-9145-451d-9c58-60ff2d0a603d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004053894s
addons_test.go:417: (dbg) Run:  kubectl --context addons-789485 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)
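Once the metrics-server pod reports Running, the only functional assertion is that kubectl top returns per-pod usage for kube-system. By hand:

    # Errors out until metrics-server has completed at least one scrape cycle.
    kubectl --context addons-789485 top pods -n kube-system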

                                                
                                    
x
+
TestAddons/parallel/CSI (45.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.550741ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [30293ccc-636c-45d8-a246-8ebd2350524e] Pending
helpers_test.go:344: "task-pv-pod" [30293ccc-636c-45d8-a246-8ebd2350524e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [30293ccc-636c-45d8-a246-8ebd2350524e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.011637696s
addons_test.go:590: (dbg) Run:  kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-789485 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-789485 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-789485 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-789485 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6afc0d20-a20f-48ae-a61b-8f204dd880ae] Pending
helpers_test.go:344: "task-pv-pod-restore" [6afc0d20-a20f-48ae-a61b-8f204dd880ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6afc0d20-a20f-48ae-a61b-8f204dd880ae] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004680659s
addons_test.go:632: (dbg) Run:  kubectl --context addons-789485 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-789485 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-789485 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-789485 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.940238323s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-789485 addons disable volumesnapshots --alsologtostderr -v=1: (1.256874431s)
--- PASS: TestAddons/parallel/CSI (45.22s)
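The CSI flow above is: create a PVC, mount it from a pod, snapshot the bound volume, delete the originals, then restore a new PVC and pod from the snapshot before disabling csi-hostpath-driver and volumesnapshots. The manifests are the repo's testdata/csi-hostpath-driver files (their contents are not reproduced or guessed at here); the object names in the comments come from the log:

    # Provision and consume: PVC "hpvc" and pod "task-pv-pod".
    kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    # Snapshot the bound volume as "new-snapshot-demo", then remove the originals.
    kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-789485 delete pod task-pv-pod
    kubectl --context addons-789485 delete pvc hpvc
    # Restore: PVC "hpvc-restore" and pod "task-pv-pod-restore" are created from the snapshot.
    kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-789485 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml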

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-789485 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-789485 --alsologtostderr -v=1: (1.492372855s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-cvfvd" [0690094e-cab3-4940-b639-80f54761be1d] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-cvfvd" [0690094e-cab3-4940-b639-80f54761be1d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-cvfvd" [0690094e-cab3-4940-b639-80f54761be1d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003997063s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-789485 addons disable headlamp --alsologtostderr -v=1: (5.897804107s)
--- PASS: TestAddons/parallel/Headlamp (17.40s)

TestAddons/parallel/CloudSpanner (5.97s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-hvxxd" [d37f727d-c5a2-43ad-bf84-1fda46d36343] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005264909s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-789485
--- PASS: TestAddons/parallel/CloudSpanner (5.97s)

TestAddons/parallel/LocalPath (8.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-789485 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-789485 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-789485 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [398d9670-2a0d-435f-8b16-9f3c6bae9227] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [398d9670-2a0d-435f-8b16-9f3c6bae9227] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [398d9670-2a0d-435f-8b16-9f3c6bae9227] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003946228s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-789485 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 ssh "cat /opt/local-path-provisioner/pvc-503ce065-8e4f-4367-8f25-861351c7bcf5_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-789485 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-789485 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.80s)

TestAddons/parallel/NvidiaDevicePlugin (6.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2l8sd" [70892ae8-95d3-48c7-b918-33a39d71c08b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003084073s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-789485
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

TestAddons/parallel/Yakd (12.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-g77xw" [85881b69-62c7-423a-b12e-476a57bc2536] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00437476s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-789485 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-789485 addons disable yakd --alsologtostderr -v=1: (6.004879293s)
--- PASS: TestAddons/parallel/Yakd (12.01s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-789485
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-789485: (12.106476707s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-789485
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-789485
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-789485
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestCertOptions (41.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-854184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-854184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (37.983577865s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-854184 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-854184 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-854184 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-854184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-854184
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-854184: (2.066661405s)
--- PASS: TestCertOptions (41.21s)

TestCertExpiration (231.52s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-072717 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-072717 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.637961482s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-072717 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-072717 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.421316859s)
helpers_test.go:175: Cleaning up "cert-expiration-072717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-072717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-072717: (2.462939531s)
--- PASS: TestCertExpiration (231.52s)

TestForceSystemdFlag (51.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-705940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-705940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (48.431565797s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-705940 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-705940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-705940
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-705940: (2.311508005s)
--- PASS: TestForceSystemdFlag (51.18s)

TestForceSystemdEnv (42.65s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-896390 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-896390 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.697852295s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-896390 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-896390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-896390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-896390: (2.478511107s)
--- PASS: TestForceSystemdEnv (42.65s)

TestErrorSpam/setup (33.68s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-382954 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-382954 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-382954 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-382954 --driver=docker  --container-runtime=containerd: (33.681214835s)
--- PASS: TestErrorSpam/setup (33.68s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.84s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (12.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 stop: (12.138796696s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-382954 --log_dir /tmp/nospam-382954 stop
--- PASS: TestErrorSpam/stop (12.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19479-4141166/.minikube/files/etc/test/nested/copy/4146547/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-893834 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-893834 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m5.853736381s)
--- PASS: TestFunctional/serial/StartWithProxy (65.86s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.83s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-893834 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-893834 --alsologtostderr -v=8: (6.830278249s)
functional_test.go:663: soft start took 6.831395051s for "functional-893834" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.83s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-893834 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 cache add registry.k8s.io/pause:3.1: (1.551758676s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 cache add registry.k8s.io/pause:3.3: (1.652086124s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 cache add registry.k8s.io/pause:latest: (1.229867649s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-893834 /tmp/TestFunctionalserialCacheCmdcacheadd_local4135309866/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cache add minikube-local-cache-test:functional-893834
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cache delete minikube-local-cache-test:functional-893834
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-893834
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.257586ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 cache reload: (1.085264809s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 kubectl -- --context functional-893834 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-893834 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (46.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-893834 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-893834 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.537454826s)
functional_test.go:761: restart took 46.537572954s for "functional-893834" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.54s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-893834 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.7s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 logs: (1.701994595s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

TestFunctional/serial/LogsFileCmd (1.74s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 logs --file /tmp/TestFunctionalserialLogsFileCmd1768445862/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 logs --file /tmp/TestFunctionalserialLogsFileCmd1768445862/001/logs.txt: (1.735657051s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)

TestFunctional/serial/InvalidService (4.97s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-893834 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-893834
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-893834: exit status 115 (931.962526ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30147 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-893834 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.97s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 config get cpus: exit status 14 (79.578578ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 config get cpus: exit status 14 (71.191725ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (6.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-893834 --alsologtostderr -v=1]
E0819 13:09:46.575933 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
2024/08/19 13:09:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-893834 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4181225: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.52s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-893834 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-893834 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (201.868538ms)

-- stdout --
	* [functional-893834] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 13:09:43.701762 4180983 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:09:43.701935 4180983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:09:43.701956 4180983 out.go:358] Setting ErrFile to fd 2...
	I0819 13:09:43.701976 4180983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:09:43.702240 4180983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:09:43.702639 4180983 out.go:352] Setting JSON to false
	I0819 13:09:43.703691 4180983 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":96728,"bootTime":1723976256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 13:09:43.703842 4180983 start.go:139] virtualization:  
	I0819 13:09:43.707216 4180983 out.go:177] * [functional-893834] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 13:09:43.710639 4180983 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:09:43.710898 4180983 notify.go:220] Checking for updates...
	I0819 13:09:43.717147 4180983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:09:43.725711 4180983 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:09:43.728371 4180983 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 13:09:43.730856 4180983 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 13:09:43.733634 4180983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:09:43.736732 4180983 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:09:43.737345 4180983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:09:43.773375 4180983 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 13:09:43.773503 4180983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:09:43.834161 4180983 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 13:09:43.823702144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:09:43.834268 4180983 docker.go:307] overlay module found
	I0819 13:09:43.837201 4180983 out.go:177] * Using the docker driver based on existing profile
	I0819 13:09:43.839930 4180983 start.go:297] selected driver: docker
	I0819 13:09:43.839955 4180983 start.go:901] validating driver "docker" against &{Name:functional-893834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-893834 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:09:43.840082 4180983 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:09:43.843175 4180983 out.go:201] 
	W0819 13:09:43.845882 4180983 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 13:09:43.848456 4180983 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-893834 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0819 13:09:44.014132 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-893834 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-893834 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (211.183024ms)

-- stdout --
	* [functional-893834] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 13:09:43.497414 4180939 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:09:43.497565 4180939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:09:43.497577 4180939 out.go:358] Setting ErrFile to fd 2...
	I0819 13:09:43.497583 4180939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:09:43.498400 4180939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:09:43.498848 4180939 out.go:352] Setting JSON to false
	I0819 13:09:43.499977 4180939 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":96727,"bootTime":1723976256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 13:09:43.500079 4180939 start.go:139] virtualization:  
	I0819 13:09:43.503908 4180939 out.go:177] * [functional-893834] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0819 13:09:43.507490 4180939 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:09:43.507526 4180939 notify.go:220] Checking for updates...
	I0819 13:09:43.513296 4180939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:09:43.515873 4180939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:09:43.518469 4180939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 13:09:43.521017 4180939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 13:09:43.523861 4180939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:09:43.527124 4180939 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:09:43.527679 4180939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:09:43.557450 4180939 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 13:09:43.557569 4180939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:09:43.632333 4180939 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 13:09:43.619894106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:09:43.632462 4180939 docker.go:307] overlay module found
	I0819 13:09:43.635364 4180939 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 13:09:43.638001 4180939 start.go:297] selected driver: docker
	I0819 13:09:43.638023 4180939 start.go:901] validating driver "docker" against &{Name:functional-893834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-893834 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 13:09:43.638142 4180939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:09:43.641266 4180939 out.go:201] 
	W0819 13:09:43.644128 4180939 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 13:09:43.646802 4180939 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 status
E0819 13:09:42.732725 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (9.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-893834 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-893834 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-m5dq8" [cc884f42-587a-431a-9b51-9ef4c0b85f8c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-m5dq8" [cc884f42-587a-431a-9b51-9ef4c0b85f8c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.00412768s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30827
functional_test.go:1675: http://192.168.49.2:30827: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-m5dq8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30827
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.60s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh -n functional-893834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cp functional-893834:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3266978371/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh -n functional-893834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh -n functional-893834 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/4146547/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo cat /etc/test/nested/copy/4146547/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/4146547.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo cat /etc/ssl/certs/4146547.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/4146547.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo cat /usr/share/ca-certificates/4146547.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/41465472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo cat /etc/ssl/certs/41465472.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/41465472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo cat /usr/share/ca-certificates/41465472.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-893834 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 ssh "sudo systemctl is-active docker": exit status 1 (271.128638ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 ssh "sudo systemctl is-active crio": exit status 1 (259.331075ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-893834 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-893834 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-893834 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4177937: os: process already finished
helpers_test.go:502: unable to terminate pid 4177763: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-893834 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-893834 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-893834 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6bc28cdf-1ed4-4f36-97ea-adf4de90f5f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6bc28cdf-1ed4-4f36-97ea-adf4de90f5f5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004258526s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-893834 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.214.129 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-893834 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-893834 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-893834 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-svk74" [7c555b9c-3ed1-469f-b508-4168d4941fdd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-svk74" [7c555b9c-3ed1-469f-b508-4168d4941fdd] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003292066s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 service list -o json
functional_test.go:1494: Took "571.275219ms" to run "out/minikube-linux-arm64 -p functional-893834 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31453
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31453
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "340.603137ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.280622ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "329.923843ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "56.970746ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdany-port279734441/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724072972591978061" to /tmp/TestFunctionalparallelMountCmdany-port279734441/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724072972591978061" to /tmp/TestFunctionalparallelMountCmdany-port279734441/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724072972591978061" to /tmp/TestFunctionalparallelMountCmdany-port279734441/001/test-1724072972591978061
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.078828ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 13:09 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 13:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 13:09 test-1724072972591978061
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh cat /mount-9p/test-1724072972591978061
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-893834 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [710785ab-0087-4ec7-97b9-6fc7a6e30130] Pending
helpers_test.go:344: "busybox-mount" [710785ab-0087-4ec7-97b9-6fc7a6e30130] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [710785ab-0087-4ec7-97b9-6fc7a6e30130] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [710785ab-0087-4ec7-97b9-6fc7a6e30130] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003542641s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-893834 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdany-port279734441/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.80s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdspecific-port3515101320/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.013423ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdspecific-port3515101320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 ssh "sudo umount -f /mount-9p": exit status 1 (263.220106ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-893834 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdspecific-port3515101320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup553727649/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup553727649/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup553727649/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T" /mount1
E0819 13:09:41.427938 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:09:41.442193 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:09:41.454373 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:09:41.479930 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:09:41.521606 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:09:41.602972 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:09:41.767639 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh "findmnt -T" /mount3
E0819 13:09:42.091098 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-893834 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup553727649/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup553727649/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-893834 /tmp/TestFunctionalparallelMountCmdVerifyCleanup553727649/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 version -o=json --components: (1.535094475s)
--- PASS: TestFunctional/parallel/Version/components (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-893834 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-893834
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-893834
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-893834 image ls --format short --alsologtostderr:
I0819 13:10:01.177642 4182682 out.go:345] Setting OutFile to fd 1 ...
I0819 13:10:01.177857 4182682 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:01.177872 4182682 out.go:358] Setting ErrFile to fd 2...
I0819 13:10:01.177878 4182682 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:01.178193 4182682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
I0819 13:10:01.179012 4182682 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:01.179181 4182682 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:01.180070 4182682 cli_runner.go:164] Run: docker container inspect functional-893834 --format={{.State.Status}}
I0819 13:10:01.198298 4182682 ssh_runner.go:195] Run: systemctl --version
I0819 13:10:01.198366 4182682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893834
I0819 13:10:01.217299 4182682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38275 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/functional-893834/id_rsa Username:docker}
I0819 13:10:01.308494 4182682 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-893834 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| docker.io/library/minikube-local-cache-test | functional-893834  | sha256:e344fd | 992B   |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-893834  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| localhost/my-image                          | functional-893834  | sha256:bd18ad | 831kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-893834 image ls --format table --alsologtostderr:
I0819 13:10:04.795400 4183005 out.go:345] Setting OutFile to fd 1 ...
I0819 13:10:04.795599 4183005 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:04.795626 4183005 out.go:358] Setting ErrFile to fd 2...
I0819 13:10:04.795646 4183005 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:04.795955 4183005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
I0819 13:10:04.796654 4183005 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:04.796839 4183005 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:04.797374 4183005 cli_runner.go:164] Run: docker container inspect functional-893834 --format={{.State.Status}}
I0819 13:10:04.815519 4183005 ssh_runner.go:195] Run: systemctl --version
I0819 13:10:04.815586 4183005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893834
I0819 13:10:04.833630 4183005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38275 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/functional-893834/id_rsa Username:docker}
I0819 13:10:04.924593 4183005 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-893834 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:bd18ad283316efc2a60f11d2f5336471ce303d430ed54006b371a1d021f23514","repoDigests":[],"repoTags":["localhost/my-image:functional-893834"],"size":"830617"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-893834"],"size":"2173567"},{"
id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:e344fd7653e76a65bf1ea63c29b61c4f3cdee2311247409076438fb4aa5b9970","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-893834"],"size":"992"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d14
18e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e569
9057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af
6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-893834 image ls --format json --alsologtostderr:
I0819 13:10:04.561280 4182974 out.go:345] Setting OutFile to fd 1 ...
I0819 13:10:04.561493 4182974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:04.561507 4182974 out.go:358] Setting ErrFile to fd 2...
I0819 13:10:04.561514 4182974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:04.561805 4182974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
I0819 13:10:04.562542 4182974 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:04.562719 4182974 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:04.563281 4182974 cli_runner.go:164] Run: docker container inspect functional-893834 --format={{.State.Status}}
I0819 13:10:04.580780 4182974 ssh_runner.go:195] Run: systemctl --version
I0819 13:10:04.580837 4182974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893834
I0819 13:10:04.600034 4182974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38275 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/functional-893834/id_rsa Username:docker}
I0819 13:10:04.696921 4182974 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-893834 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:e344fd7653e76a65bf1ea63c29b61c4f3cdee2311247409076438fb4aa5b9970
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-893834
size: "992"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-893834
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-893834 image ls --format yaml --alsologtostderr:
I0819 13:10:01.412436 4182712 out.go:345] Setting OutFile to fd 1 ...
I0819 13:10:01.412668 4182712 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:01.412682 4182712 out.go:358] Setting ErrFile to fd 2...
I0819 13:10:01.412687 4182712 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:01.412964 4182712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
I0819 13:10:01.413673 4182712 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:01.413847 4182712 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:01.414386 4182712 cli_runner.go:164] Run: docker container inspect functional-893834 --format={{.State.Status}}
I0819 13:10:01.432065 4182712 ssh_runner.go:195] Run: systemctl --version
I0819 13:10:01.432118 4182712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893834
I0819 13:10:01.456273 4182712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38275 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/functional-893834/id_rsa Username:docker}
I0819 13:10:01.548656 4182712 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
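Note: the YAML listing above is assembled from the container runtime's own image inventory; the trace shows the command ssh-ing into the node and running crictl. A minimal sketch of reproducing both views by hand, assuming the functional-893834 profile from this run is still up (not part of the test itself):
out/minikube-linux-arm64 -p functional-893834 image ls --format yaml
# raw data it is derived from, taken straight from containerd inside the node:
out/minikube-linux-arm64 -p functional-893834 ssh -- sudo crictl images --output json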

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-893834 ssh pgrep buildkitd: exit status 1 (269.937256ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image build -t localhost/my-image:functional-893834 testdata/build --alsologtostderr
E0819 13:10:01.939243 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-893834 image build -t localhost/my-image:functional-893834 testdata/build --alsologtostderr: (2.405042639s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-893834 image build -t localhost/my-image:functional-893834 testdata/build --alsologtostderr:
I0819 13:10:01.914028 4182800 out.go:345] Setting OutFile to fd 1 ...
I0819 13:10:01.915050 4182800 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:01.915065 4182800 out.go:358] Setting ErrFile to fd 2...
I0819 13:10:01.915071 4182800 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 13:10:01.915360 4182800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
I0819 13:10:01.916179 4182800 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:01.917649 4182800 config.go:182] Loaded profile config "functional-893834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 13:10:01.918204 4182800 cli_runner.go:164] Run: docker container inspect functional-893834 --format={{.State.Status}}
I0819 13:10:01.937845 4182800 ssh_runner.go:195] Run: systemctl --version
I0819 13:10:01.937907 4182800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-893834
I0819 13:10:01.956360 4182800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38275 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/functional-893834/id_rsa Username:docker}
I0819 13:10:02.053059 4182800 build_images.go:161] Building image from path: /tmp/build.325642809.tar
I0819 13:10:02.053215 4182800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 13:10:02.063841 4182800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.325642809.tar
I0819 13:10:02.067863 4182800 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.325642809.tar: stat -c "%s %y" /var/lib/minikube/build/build.325642809.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.325642809.tar': No such file or directory
I0819 13:10:02.067896 4182800 ssh_runner.go:362] scp /tmp/build.325642809.tar --> /var/lib/minikube/build/build.325642809.tar (3072 bytes)
I0819 13:10:02.094224 4182800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.325642809
I0819 13:10:02.104681 4182800 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.325642809 -xf /var/lib/minikube/build/build.325642809.tar
I0819 13:10:02.114898 4182800 containerd.go:394] Building image: /var/lib/minikube/build/build.325642809
I0819 13:10:02.115013 4182800 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.325642809 --local dockerfile=/var/lib/minikube/build/build.325642809 --output type=image,name=localhost/my-image:functional-893834
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.1s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:57e58bd8bddad01091265e77bb778bcf16965ff3efddba6ed5dda4487c2226ab done
#8 exporting config sha256:bd18ad283316efc2a60f11d2f5336471ce303d430ed54006b371a1d021f23514 done
#8 naming to localhost/my-image:functional-893834 0.0s done
#8 DONE 0.1s
I0819 13:10:04.244224 4182800 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.325642809 --local dockerfile=/var/lib/minikube/build/build.325642809 --output type=image,name=localhost/my-image:functional-893834: (2.129175404s)
I0819 13:10:04.244327 4182800 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.325642809
I0819 13:10:04.254280 4182800 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.325642809.tar
I0819 13:10:04.263991 4182800 build_images.go:217] Built localhost/my-image:functional-893834 from /tmp/build.325642809.tar
I0819 13:10:04.264027 4182800 build_images.go:133] succeeded building to: functional-893834
I0819 13:10:04.264034 4182800 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)
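Note: the buildkit stages above (load Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) let the build context be reconstructed. The sketch below is inferred from those stages, not the actual contents of testdata/build, and the /tmp path is only an example:
mkdir -p /tmp/build-example && cd /tmp/build-example
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo example > content.txt
out/minikube-linux-arm64 -p functional-893834 image build -t localhost/my-image:functional-893834 .
As the ssh_runner lines above show, minikube then tars the context, copies it to /var/lib/minikube/build on the node, and runs buildctl against it.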

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
E0819 13:09:51.697676 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-893834
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image load --daemon kicbase/echo-server:functional-893834 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image load --daemon kicbase/echo-server:functional-893834 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-893834
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image load --daemon kicbase/echo-server:functional-893834 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image save kicbase/echo-server:functional-893834 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image rm kicbase/echo-server:functional-893834 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)
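Note: ImageSaveToFile and ImageLoadFromFile together exercise the tar round trip. A compact sketch of the same flow, assuming the profile is running and using /tmp for the archive instead of the workspace path above:
out/minikube-linux-arm64 -p functional-893834 image save kicbase/echo-server:functional-893834 /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-893834 image rm kicbase/echo-server:functional-893834
out/minikube-linux-arm64 -p functional-893834 image load /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-893834 image ls   # the reloaded tag should appear again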

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-893834
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 image save --daemon kicbase/echo-server:functional-893834 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-893834
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-893834 update-context --alsologtostderr -v=2
E0819 13:10:22.420566 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:11:03.382069 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-893834
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-893834
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-893834
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (114.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-157720 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 13:12:25.304276 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:03.464538 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:03.470862 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:03.482236 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:03.503704 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:03.545076 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:03.626519 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:03.788016 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:04.110202 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:04.751660 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:06.033961 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:08.595909 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-157720 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m53.982143937s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (114.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (41.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- rollout status deployment/busybox
E0819 13:14:13.717933 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:23.959283 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:41.428933 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:14:44.440864 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-157720 -- rollout status deployment/busybox: (38.093792119s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-287vz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-2rw8n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-r6zvq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-287vz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-2rw8n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-r6zvq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-287vz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-2rw8n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-r6zvq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (41.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-287vz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-287vz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-2rw8n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-2rw8n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-r6zvq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-157720 -- exec busybox-7dff88458-r6zvq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
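Note: in the pipeline above, awk 'NR==5' and cut -d' ' -f3 pick the resolved address out of this image's nslookup output; the exact line and field offsets are specific to the BusyBox build in the test image. A hand-run sketch of the same check (pod name and gateway address taken from this run):
kubectl --context ha-157720 exec busybox-7dff88458-287vz -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context ha-157720 exec busybox-7dff88458-287vz -- sh -c "ping -c 1 192.168.49.1"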

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-157720 -v=7 --alsologtostderr
E0819 13:15:09.146189 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-157720 -v=7 --alsologtostderr: (23.052317459s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr: (1.022829225s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-157720 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp testdata/cp-test.txt ha-157720:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile530244777/001/cp-test_ha-157720.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720:/home/docker/cp-test.txt ha-157720-m02:/home/docker/cp-test_ha-157720_ha-157720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test_ha-157720_ha-157720-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720:/home/docker/cp-test.txt ha-157720-m03:/home/docker/cp-test_ha-157720_ha-157720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test_ha-157720_ha-157720-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720:/home/docker/cp-test.txt ha-157720-m04:/home/docker/cp-test_ha-157720_ha-157720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test_ha-157720_ha-157720-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp testdata/cp-test.txt ha-157720-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile530244777/001/cp-test_ha-157720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m02:/home/docker/cp-test.txt ha-157720:/home/docker/cp-test_ha-157720-m02_ha-157720.txt
E0819 13:15:25.402588 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test_ha-157720-m02_ha-157720.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m02:/home/docker/cp-test.txt ha-157720-m03:/home/docker/cp-test_ha-157720-m02_ha-157720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test_ha-157720-m02_ha-157720-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m02:/home/docker/cp-test.txt ha-157720-m04:/home/docker/cp-test_ha-157720-m02_ha-157720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test_ha-157720-m02_ha-157720-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp testdata/cp-test.txt ha-157720-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile530244777/001/cp-test_ha-157720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m03:/home/docker/cp-test.txt ha-157720:/home/docker/cp-test_ha-157720-m03_ha-157720.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test_ha-157720-m03_ha-157720.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m03:/home/docker/cp-test.txt ha-157720-m02:/home/docker/cp-test_ha-157720-m03_ha-157720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test_ha-157720-m03_ha-157720-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m03:/home/docker/cp-test.txt ha-157720-m04:/home/docker/cp-test_ha-157720-m03_ha-157720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test_ha-157720-m03_ha-157720-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp testdata/cp-test.txt ha-157720-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile530244777/001/cp-test_ha-157720-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m04:/home/docker/cp-test.txt ha-157720:/home/docker/cp-test_ha-157720-m04_ha-157720.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720 "sudo cat /home/docker/cp-test_ha-157720-m04_ha-157720.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m04:/home/docker/cp-test.txt ha-157720-m02:/home/docker/cp-test_ha-157720-m04_ha-157720-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test_ha-157720-m04_ha-157720-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 cp ha-157720-m04:/home/docker/cp-test.txt ha-157720-m03:/home/docker/cp-test_ha-157720-m04_ha-157720-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m03 "sudo cat /home/docker/cp-test_ha-157720-m04_ha-157720-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.21s)
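Note: every permutation in the copy matrix above reduces to the same two primitives, shown here as a hand-run sketch (file and node names taken from this run):
out/minikube-linux-arm64 -p ha-157720 cp testdata/cp-test.txt ha-157720-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-157720 ssh -n ha-157720-m02 "sudo cat /home/docker/cp-test.txt"
cp also accepts node:path on both sides, which is how the node-to-node copies above are expressed.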

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-157720 node stop m02 -v=7 --alsologtostderr: (12.123446259s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr: exit status 7 (842.678175ms)

                                                
                                                
-- stdout --
	ha-157720
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-157720-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-157720-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-157720-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:15:49.752251    5844 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:15:49.752426    5844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:15:49.752456    5844 out.go:358] Setting ErrFile to fd 2...
	I0819 13:15:49.752476    5844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:15:49.752761    5844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:15:49.752967    5844 out.go:352] Setting JSON to false
	I0819 13:15:49.753035    5844 mustload.go:65] Loading cluster: ha-157720
	I0819 13:15:49.753146    5844 notify.go:220] Checking for updates...
	I0819 13:15:49.753488    5844 config.go:182] Loaded profile config "ha-157720": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:15:49.753520    5844 status.go:255] checking status of ha-157720 ...
	I0819 13:15:49.754099    5844 cli_runner.go:164] Run: docker container inspect ha-157720 --format={{.State.Status}}
	I0819 13:15:49.774127    5844 status.go:330] ha-157720 host status = "Running" (err=<nil>)
	I0819 13:15:49.774150    5844 host.go:66] Checking if "ha-157720" exists ...
	I0819 13:15:49.774470    5844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-157720
	I0819 13:15:49.798627    5844 host.go:66] Checking if "ha-157720" exists ...
	I0819 13:15:49.799115    5844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:15:49.799178    5844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-157720
	I0819 13:15:49.820094    5844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38280 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/ha-157720/id_rsa Username:docker}
	I0819 13:15:49.917269    5844 ssh_runner.go:195] Run: systemctl --version
	I0819 13:15:49.922015    5844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:15:49.951152    5844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:15:50.040287    5844 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-19 13:15:50.026690584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:15:50.040958    5844 kubeconfig.go:125] found "ha-157720" server: "https://192.168.49.254:8443"
	I0819 13:15:50.040998    5844 api_server.go:166] Checking apiserver status ...
	I0819 13:15:50.041053    5844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:15:50.066902    5844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1470/cgroup
	I0819 13:15:50.079707    5844 api_server.go:182] apiserver freezer: "11:freezer:/docker/d8e525a49d1f0d24cfb32e7b24ad90c79f860450b372fa0f1db20fb7cbca000a/kubepods/burstable/pod9124231fa3425c7871d80198e11fdc7b/b947ad1746e5e024dd07b70ca6bd11a7bc38480f8adf20af29f22bf79ebd3916"
	I0819 13:15:50.079837    5844 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d8e525a49d1f0d24cfb32e7b24ad90c79f860450b372fa0f1db20fb7cbca000a/kubepods/burstable/pod9124231fa3425c7871d80198e11fdc7b/b947ad1746e5e024dd07b70ca6bd11a7bc38480f8adf20af29f22bf79ebd3916/freezer.state
	I0819 13:15:50.094699    5844 api_server.go:204] freezer state: "THAWED"
	I0819 13:15:50.094753    5844 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 13:15:50.103057    5844 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 13:15:50.103093    5844 status.go:422] ha-157720 apiserver status = Running (err=<nil>)
	I0819 13:15:50.103105    5844 status.go:257] ha-157720 status: &{Name:ha-157720 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:15:50.103124    5844 status.go:255] checking status of ha-157720-m02 ...
	I0819 13:15:50.103484    5844 cli_runner.go:164] Run: docker container inspect ha-157720-m02 --format={{.State.Status}}
	I0819 13:15:50.123908    5844 status.go:330] ha-157720-m02 host status = "Stopped" (err=<nil>)
	I0819 13:15:50.123932    5844 status.go:343] host is not running, skipping remaining checks
	I0819 13:15:50.123940    5844 status.go:257] ha-157720-m02 status: &{Name:ha-157720-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:15:50.123960    5844 status.go:255] checking status of ha-157720-m03 ...
	I0819 13:15:50.124393    5844 cli_runner.go:164] Run: docker container inspect ha-157720-m03 --format={{.State.Status}}
	I0819 13:15:50.152515    5844 status.go:330] ha-157720-m03 host status = "Running" (err=<nil>)
	I0819 13:15:50.152726    5844 host.go:66] Checking if "ha-157720-m03" exists ...
	I0819 13:15:50.153079    5844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-157720-m03
	I0819 13:15:50.171021    5844 host.go:66] Checking if "ha-157720-m03" exists ...
	I0819 13:15:50.171334    5844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:15:50.171379    5844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-157720-m03
	I0819 13:15:50.193727    5844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38290 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/ha-157720-m03/id_rsa Username:docker}
	I0819 13:15:50.297395    5844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:15:50.311723    5844 kubeconfig.go:125] found "ha-157720" server: "https://192.168.49.254:8443"
	I0819 13:15:50.311757    5844 api_server.go:166] Checking apiserver status ...
	I0819 13:15:50.311875    5844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:15:50.324733    5844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1381/cgroup
	I0819 13:15:50.334036    5844 api_server.go:182] apiserver freezer: "11:freezer:/docker/c3bcc877d05a6a76cb856ba710999017f68843a47fb71fb3633e5d55c5c4560f/kubepods/burstable/podea21090751c38dee83e851a189717874/9f223c1acb012d5ebe99cfffd78d3c57a4589ae7c88f01c696dec5ac30a300dc"
	I0819 13:15:50.334119    5844 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c3bcc877d05a6a76cb856ba710999017f68843a47fb71fb3633e5d55c5c4560f/kubepods/burstable/podea21090751c38dee83e851a189717874/9f223c1acb012d5ebe99cfffd78d3c57a4589ae7c88f01c696dec5ac30a300dc/freezer.state
	I0819 13:15:50.343682    5844 api_server.go:204] freezer state: "THAWED"
	I0819 13:15:50.343715    5844 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 13:15:50.351650    5844 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 13:15:50.351702    5844 status.go:422] ha-157720-m03 apiserver status = Running (err=<nil>)
	I0819 13:15:50.351714    5844 status.go:257] ha-157720-m03 status: &{Name:ha-157720-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:15:50.351740    5844 status.go:255] checking status of ha-157720-m04 ...
	I0819 13:15:50.352128    5844 cli_runner.go:164] Run: docker container inspect ha-157720-m04 --format={{.State.Status}}
	I0819 13:15:50.369270    5844 status.go:330] ha-157720-m04 host status = "Running" (err=<nil>)
	I0819 13:15:50.369295    5844 host.go:66] Checking if "ha-157720-m04" exists ...
	I0819 13:15:50.369603    5844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-157720-m04
	I0819 13:15:50.386220    5844 host.go:66] Checking if "ha-157720-m04" exists ...
	I0819 13:15:50.386726    5844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:15:50.386786    5844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-157720-m04
	I0819 13:15:50.404785    5844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38295 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/ha-157720-m04/id_rsa Username:docker}
	I0819 13:15:50.501576    5844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:15:50.521080    5844 status.go:257] ha-157720-m04 status: &{Name:ha-157720-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.97s)
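Note: the stderr trace shows how status decides that an apiserver is healthy on each running control-plane node: find the kube-apiserver process, read its freezer cgroup, confirm the cgroup is THAWED, then probe /healthz through the load-balancer endpoint. A sketch of the same steps run by hand inside a node (via minikube ssh); the cgroup layout assumes cgroup v1 as in this run, and PIDs/paths will differ:
PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
CG=$(sudo grep -E '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
sudo cat /sys/fs/cgroup/freezer${CG}/freezer.state   # expected: THAWED
curl -sk https://192.168.49.254:8443/healthz          # expected: ok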

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (18.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-157720 node start m02 -v=7 --alsologtostderr: (17.831581392s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr: (1.020786445s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.127338341s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-157720 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-157720 -v=7 --alsologtostderr
E0819 13:16:47.324877 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-157720 -v=7 --alsologtostderr: (37.266540407s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-157720 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-157720 --wait=true -v=7 --alsologtostderr: (1m36.897738342s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-157720
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-157720 node delete m03 -v=7 --alsologtostderr: (9.712890345s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 stop -v=7 --alsologtostderr
E0819 13:19:03.464813 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-157720 stop -v=7 --alsologtostderr: (36.089031964s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr: exit status 7 (116.198166ms)

                                                
                                                
-- stdout --
	ha-157720
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-157720-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-157720-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:19:12.888137   20104 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:19:12.888541   20104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:19:12.888553   20104 out.go:358] Setting ErrFile to fd 2...
	I0819 13:19:12.888559   20104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:19:12.888798   20104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:19:12.888984   20104 out.go:352] Setting JSON to false
	I0819 13:19:12.889034   20104 mustload.go:65] Loading cluster: ha-157720
	I0819 13:19:12.889113   20104 notify.go:220] Checking for updates...
	I0819 13:19:12.890383   20104 config.go:182] Loaded profile config "ha-157720": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:19:12.890410   20104 status.go:255] checking status of ha-157720 ...
	I0819 13:19:12.891055   20104 cli_runner.go:164] Run: docker container inspect ha-157720 --format={{.State.Status}}
	I0819 13:19:12.907404   20104 status.go:330] ha-157720 host status = "Stopped" (err=<nil>)
	I0819 13:19:12.907429   20104 status.go:343] host is not running, skipping remaining checks
	I0819 13:19:12.907437   20104 status.go:257] ha-157720 status: &{Name:ha-157720 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:19:12.907469   20104 status.go:255] checking status of ha-157720-m02 ...
	I0819 13:19:12.907817   20104 cli_runner.go:164] Run: docker container inspect ha-157720-m02 --format={{.State.Status}}
	I0819 13:19:12.927218   20104 status.go:330] ha-157720-m02 host status = "Stopped" (err=<nil>)
	I0819 13:19:12.927245   20104 status.go:343] host is not running, skipping remaining checks
	I0819 13:19:12.927253   20104 status.go:257] ha-157720-m02 status: &{Name:ha-157720-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:19:12.927282   20104 status.go:255] checking status of ha-157720-m04 ...
	I0819 13:19:12.927567   20104 cli_runner.go:164] Run: docker container inspect ha-157720-m04 --format={{.State.Status}}
	I0819 13:19:12.948576   20104 status.go:330] ha-157720-m04 host status = "Stopped" (err=<nil>)
	I0819 13:19:12.948599   20104 status.go:343] host is not running, skipping remaining checks
	I0819 13:19:12.948607   20104 status.go:257] ha-157720-m04 status: &{Name:ha-157720-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.21s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (63.38s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-157720 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 13:19:31.166568 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:19:41.428849 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-157720 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.414402266s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (63.38s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.32s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-157720 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-157720 --control-plane -v=7 --alsologtostderr: (43.090983119s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-157720 status -v=7 --alsologtostderr: (1.231993983s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                    
TestJSONOutput/start/Command (49.43s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-362913 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-362913 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (49.424166019s)
--- PASS: TestJSONOutput/start/Command (49.43s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-362913 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.96s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-362913 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.96s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-362913 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-362913 --output=json --user=testUser: (5.834853059s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-593032 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-593032 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.254607ms)
-- stdout --
	{"specversion":"1.0","id":"24fee7f0-6789-410b-914f-e4722c991a6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-593032] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"840b6bcc-9bce-4ac9-bef6-e02fef802705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"6088568e-5480-49db-837d-f0ff4b1edfde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d2542a38-8bf6-4d2c-8383-81b3f1030d68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig"}}
	{"specversion":"1.0","id":"b58c7ffa-9bf5-4597-ac39-56dd6f84b2e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube"}}
	{"specversion":"1.0","id":"167528c6-97f1-4472-9a81-2a5bed2b9a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7068e92e-40ef-467f-865b-5d32074c99b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"07967223-8051-4830-a289-7e4594999263","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-593032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-593032
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-544680 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-544680 --network=: (37.973760486s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-544680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-544680
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-544680: (2.056103351s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.05s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-175468 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-175468 --network=bridge: (32.28192154s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-175468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-175468
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-175468: (2.032122446s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.34s)

                                                
                                    
TestKicExistingNetwork (32.26s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-984107 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-984107 --network=existing-network: (30.055156781s)
helpers_test.go:175: Cleaning up "existing-network-984107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-984107
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-984107: (2.04011401s)
--- PASS: TestKicExistingNetwork (32.26s)

                                                
                                    
TestKicCustomSubnet (38.35s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-373676 --subnet=192.168.60.0/24
E0819 13:24:03.465109 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-373676 --subnet=192.168.60.0/24: (36.179027025s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-373676 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-373676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-373676
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-373676: (2.140977813s)
--- PASS: TestKicCustomSubnet (38.35s)

                                                
                                    
TestKicStaticIP (34.55s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-627997 --static-ip=192.168.200.200
E0819 13:24:41.428019 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-627997 --static-ip=192.168.200.200: (32.196828295s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-627997 ip
helpers_test.go:175: Cleaning up "static-ip-627997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-627997
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-627997: (2.195695223s)
--- PASS: TestKicStaticIP (34.55s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (70.4s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-620441 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-620441 --driver=docker  --container-runtime=containerd: (29.569911251s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-623510 --driver=docker  --container-runtime=containerd
E0819 13:26:04.507949 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-623510 --driver=docker  --container-runtime=containerd: (35.229193949s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-620441
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-623510
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-623510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-623510
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-623510: (2.014460419s)
helpers_test.go:175: Cleaning up "first-620441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-620441
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-620441: (2.262278181s)
--- PASS: TestMinikubeProfile (70.40s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-876506 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-876506 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.113086101s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.11s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-876506 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.13s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-889257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-889257 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.126863933s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.13s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-889257 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-876506 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-876506 --alsologtostderr -v=5: (1.65529684s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-889257 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-889257
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-889257: (1.203453034s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-889257
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-889257: (6.538746571s)
--- PASS: TestMountStart/serial/RestartStopped (7.54s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-889257 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (76.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-093742 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-093742 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.83731255s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.36s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (16.11s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-093742 -- rollout status deployment/busybox: (14.100402888s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-t9cqd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-wz9c4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-t9cqd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-wz9c4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-t9cqd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-wz9c4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.11s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-t9cqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-t9cqd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-wz9c4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-093742 -- exec busybox-7dff88458-wz9c4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)

                                                
                                    
TestMultiNode/serial/AddNode (18.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-093742 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-093742 -v 3 --alsologtostderr: (17.592217986s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.30s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-093742 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.19s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp testdata/cp-test.txt multinode-093742:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4169134708/001/cp-test_multinode-093742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742:/home/docker/cp-test.txt multinode-093742-m02:/home/docker/cp-test_multinode-093742_multinode-093742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m02 "sudo cat /home/docker/cp-test_multinode-093742_multinode-093742-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742:/home/docker/cp-test.txt multinode-093742-m03:/home/docker/cp-test_multinode-093742_multinode-093742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m03 "sudo cat /home/docker/cp-test_multinode-093742_multinode-093742-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp testdata/cp-test.txt multinode-093742-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4169134708/001/cp-test_multinode-093742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742-m02:/home/docker/cp-test.txt multinode-093742:/home/docker/cp-test_multinode-093742-m02_multinode-093742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742 "sudo cat /home/docker/cp-test_multinode-093742-m02_multinode-093742.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742-m02:/home/docker/cp-test.txt multinode-093742-m03:/home/docker/cp-test_multinode-093742-m02_multinode-093742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m03 "sudo cat /home/docker/cp-test_multinode-093742-m02_multinode-093742-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp testdata/cp-test.txt multinode-093742-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4169134708/001/cp-test_multinode-093742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742-m03:/home/docker/cp-test.txt multinode-093742:/home/docker/cp-test_multinode-093742-m03_multinode-093742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742 "sudo cat /home/docker/cp-test_multinode-093742-m03_multinode-093742.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 cp multinode-093742-m03:/home/docker/cp-test.txt multinode-093742-m02:/home/docker/cp-test_multinode-093742-m03_multinode-093742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 ssh -n multinode-093742-m02 "sudo cat /home/docker/cp-test_multinode-093742-m03_multinode-093742-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.19s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-093742 node stop m03: (1.225126782s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-093742 status: exit status 7 (519.183205ms)
-- stdout --
	multinode-093742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-093742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-093742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr: exit status 7 (511.697564ms)
-- stdout --
	multinode-093742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-093742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-093742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0819 13:28:52.686336   73435 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:28:52.686557   73435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:28:52.686584   73435 out.go:358] Setting ErrFile to fd 2...
	I0819 13:28:52.686603   73435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:28:52.686880   73435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:28:52.687134   73435 out.go:352] Setting JSON to false
	I0819 13:28:52.687213   73435 mustload.go:65] Loading cluster: multinode-093742
	I0819 13:28:52.687295   73435 notify.go:220] Checking for updates...
	I0819 13:28:52.687737   73435 config.go:182] Loaded profile config "multinode-093742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:28:52.687848   73435 status.go:255] checking status of multinode-093742 ...
	I0819 13:28:52.688730   73435 cli_runner.go:164] Run: docker container inspect multinode-093742 --format={{.State.Status}}
	I0819 13:28:52.708357   73435 status.go:330] multinode-093742 host status = "Running" (err=<nil>)
	I0819 13:28:52.708386   73435 host.go:66] Checking if "multinode-093742" exists ...
	I0819 13:28:52.708709   73435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-093742
	I0819 13:28:52.732343   73435 host.go:66] Checking if "multinode-093742" exists ...
	I0819 13:28:52.732747   73435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:28:52.732802   73435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-093742
	I0819 13:28:52.749588   73435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38400 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/multinode-093742/id_rsa Username:docker}
	I0819 13:28:52.841677   73435 ssh_runner.go:195] Run: systemctl --version
	I0819 13:28:52.846469   73435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:28:52.859066   73435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:28:52.917223   73435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-19 13:28:52.906366592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:28:52.917812   73435 kubeconfig.go:125] found "multinode-093742" server: "https://192.168.67.2:8443"
	I0819 13:28:52.917847   73435 api_server.go:166] Checking apiserver status ...
	I0819 13:28:52.917887   73435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 13:28:52.929943   73435 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	I0819 13:28:52.939511   73435 api_server.go:182] apiserver freezer: "11:freezer:/docker/8b6675dd4d493a59ac9b9132ef8a4875e52f18f96a2e4e5393d6865e03b911df/kubepods/burstable/pod50cb7374e7c9b75a4b4c2d440de954a7/85225a527ef8b9a95ff887767a3c3abb7bc5f7bb3b6fc511214ebc1d6951cb26"
	I0819 13:28:52.939641   73435 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8b6675dd4d493a59ac9b9132ef8a4875e52f18f96a2e4e5393d6865e03b911df/kubepods/burstable/pod50cb7374e7c9b75a4b4c2d440de954a7/85225a527ef8b9a95ff887767a3c3abb7bc5f7bb3b6fc511214ebc1d6951cb26/freezer.state
	I0819 13:28:52.948872   73435 api_server.go:204] freezer state: "THAWED"
	I0819 13:28:52.948905   73435 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 13:28:52.956924   73435 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 13:28:52.956950   73435 status.go:422] multinode-093742 apiserver status = Running (err=<nil>)
	I0819 13:28:52.956962   73435 status.go:257] multinode-093742 status: &{Name:multinode-093742 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:28:52.956996   73435 status.go:255] checking status of multinode-093742-m02 ...
	I0819 13:28:52.957327   73435 cli_runner.go:164] Run: docker container inspect multinode-093742-m02 --format={{.State.Status}}
	I0819 13:28:52.977124   73435 status.go:330] multinode-093742-m02 host status = "Running" (err=<nil>)
	I0819 13:28:52.977149   73435 host.go:66] Checking if "multinode-093742-m02" exists ...
	I0819 13:28:52.977461   73435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-093742-m02
	I0819 13:28:52.994792   73435 host.go:66] Checking if "multinode-093742-m02" exists ...
	I0819 13:28:52.995115   73435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 13:28:52.995167   73435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-093742-m02
	I0819 13:28:53.021830   73435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38405 SSHKeyPath:/home/jenkins/minikube-integration/19479-4141166/.minikube/machines/multinode-093742-m02/id_rsa Username:docker}
	I0819 13:28:53.113267   73435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 13:28:53.124761   73435 status.go:257] multinode-093742-m02 status: &{Name:multinode-093742-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:28:53.124804   73435 status.go:255] checking status of multinode-093742-m03 ...
	I0819 13:28:53.125118   73435 cli_runner.go:164] Run: docker container inspect multinode-093742-m03 --format={{.State.Status}}
	I0819 13:28:53.142163   73435 status.go:330] multinode-093742-m03 host status = "Stopped" (err=<nil>)
	I0819 13:28:53.142187   73435 status.go:343] host is not running, skipping remaining checks
	I0819 13:28:53.142195   73435 status.go:257] multinode-093742-m03 status: &{Name:multinode-093742-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-093742 node start m03 -v=7 --alsologtostderr: (8.645327204s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.41s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (91.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-093742
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-093742
E0819 13:29:03.464233 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-093742: (24.982437244s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-093742 --wait=true -v=8 --alsologtostderr
E0819 13:29:41.428010 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:30:26.528412 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-093742 --wait=true -v=8 --alsologtostderr: (1m6.811778775s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-093742
--- PASS: TestMultiNode/serial/RestartKeepsNodes (91.93s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.56s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-093742 node delete m03: (4.838653877s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.56s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-093742 stop: (23.899969624s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-093742 status: exit status 7 (101.898325ms)
-- stdout --
	multinode-093742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-093742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr: exit status 7 (81.863352ms)
-- stdout --
	multinode-093742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-093742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0819 13:31:04.090858   81907 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:31:04.091215   81907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:31:04.091231   81907 out.go:358] Setting ErrFile to fd 2...
	I0819 13:31:04.091237   81907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:31:04.091509   81907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:31:04.091746   81907 out.go:352] Setting JSON to false
	I0819 13:31:04.091820   81907 mustload.go:65] Loading cluster: multinode-093742
	I0819 13:31:04.092249   81907 config.go:182] Loaded profile config "multinode-093742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:31:04.092269   81907 status.go:255] checking status of multinode-093742 ...
	I0819 13:31:04.092789   81907 cli_runner.go:164] Run: docker container inspect multinode-093742 --format={{.State.Status}}
	I0819 13:31:04.093059   81907 notify.go:220] Checking for updates...
	I0819 13:31:04.110413   81907 status.go:330] multinode-093742 host status = "Stopped" (err=<nil>)
	I0819 13:31:04.110435   81907 status.go:343] host is not running, skipping remaining checks
	I0819 13:31:04.110445   81907 status.go:257] multinode-093742 status: &{Name:multinode-093742 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 13:31:04.110477   81907 status.go:255] checking status of multinode-093742-m02 ...
	I0819 13:31:04.110799   81907 cli_runner.go:164] Run: docker container inspect multinode-093742-m02 --format={{.State.Status}}
	I0819 13:31:04.128209   81907 status.go:330] multinode-093742-m02 host status = "Stopped" (err=<nil>)
	I0819 13:31:04.128233   81907 status.go:343] host is not running, skipping remaining checks
	I0819 13:31:04.128241   81907 status.go:257] multinode-093742-m02 status: &{Name:multinode-093742-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)
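
Note: the stop/status sequence exercised here can be reproduced by hand; a minimal sketch (<profile> is a placeholder profile name):

	# stop every node in the multinode profile
	minikube -p <profile> stop
	# with all hosts stopped, status exits with code 7 and reports host/kubelet/apiserver as Stopped
	minikube -p <profile> status --alsologtostderr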

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-093742 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-093742 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.503092367s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-093742 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.55s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-093742
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-093742-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-093742-m02 --driver=docker  --container-runtime=containerd: exit status 14 (87.978261ms)

                                                
                                                
-- stdout --
	* [multinode-093742-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-093742-m02' is duplicated with machine name 'multinode-093742-m02' in profile 'multinode-093742'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-093742-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-093742-m03 --driver=docker  --container-runtime=containerd: (30.580399923s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-093742
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-093742: exit status 80 (359.813088ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-093742 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-093742-m03 already exists in multinode-093742-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-093742-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-093742-m03: (1.997202541s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.08s)
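
Note: the two refusals logged above boil down to the invalid operations below; a rough sketch assuming an existing multinode profile (<profile> is a placeholder):

	# a new profile may not reuse a machine name belonging to an existing multinode profile (exit code 14, MK_USAGE)
	minikube start -p <profile>-m02 --driver=docker --container-runtime=containerd
	# node add is refused when the generated node name already exists as a separate profile (exit code 80, GUEST_NODE_ADD)
	minikube node add -p <profile>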

                                                
                                    
TestPreload (121.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-270883 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-270883 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m16.158423039s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-270883 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-270883 image pull gcr.io/k8s-minikube/busybox: (1.186779085s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-270883
E0819 13:34:03.465041 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-270883: (12.055351505s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-270883 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-270883 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (29.638233755s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-270883 image list
helpers_test.go:175: Cleaning up "test-preload-270883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-270883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-270883: (2.338859719s)
--- PASS: TestPreload (121.62s)
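
Note: the preload scenario reduces to the sequence below; a sketch of the equivalent manual steps (profile name is a placeholder; the Kubernetes version and image are the ones used by the test):

	# start an older cluster without a preloaded tarball, then pull an extra image into it
	minikube start -p <profile> --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
	minikube -p <profile> image pull gcr.io/k8s-minikube/busybox
	# stop and restart; the pulled image should still be listed afterwards
	minikube stop -p <profile>
	minikube start -p <profile> --driver=docker --container-runtime=containerd
	minikube -p <profile> image list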

                                                
                                    
TestScheduledStopUnix (107.56s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-348042 --memory=2048 --driver=docker  --container-runtime=containerd
E0819 13:34:41.428003 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-348042 --memory=2048 --driver=docker  --container-runtime=containerd: (31.139277526s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-348042 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-348042 -n scheduled-stop-348042
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-348042 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-348042 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-348042 -n scheduled-stop-348042
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-348042
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-348042 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-348042
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-348042: exit status 7 (72.703533ms)

                                                
                                                
-- stdout --
	scheduled-stop-348042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-348042 -n scheduled-stop-348042
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-348042 -n scheduled-stop-348042: exit status 7 (63.985586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-348042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-348042
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-348042: (4.811581576s)
--- PASS: TestScheduledStopUnix (107.56s)
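
Note: the scheduled-stop flow maps onto a handful of CLI calls; a minimal sketch using the same flags as the test (<profile> is a placeholder):

	# schedule a stop five minutes out, then cancel it
	minikube stop -p <profile> --schedule 5m
	minikube stop -p <profile> --cancel-scheduled
	# schedule a short stop and let it fire; status then reports Stopped and exits with code 7
	minikube stop -p <profile> --schedule 15s
	sleep 20
	minikube status -p <profile> --format='{{.Host}}'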

                                                
                                    
TestInsufficientStorage (11.29s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-607840 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-607840 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.799176942s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f783ff70-2ea0-4267-935c-086eb0ca40d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-607840] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ca47560-8117-4b83-8647-9bbbe96032cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"ae3dcd28-926e-4e8d-b89d-f73c1821c09c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c14fda6-313b-494b-95af-40025434ca3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig"}}
	{"specversion":"1.0","id":"492f77e3-7fb3-4e04-82eb-bce46a2b36fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube"}}
	{"specversion":"1.0","id":"81ccca99-a464-4f46-89ee-dcc61123034e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7828c4b8-c106-438c-9cdc-dc9449c4b6c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c7c9552c-7749-411c-88f9-bd5e84883f25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ab5ebd8c-d9c8-4593-8874-aa4f971bcc79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"97e77414-766f-45ce-a8a0-0639fa79bc04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e390f730-f0c0-4035-a5c2-19446c82c630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d57b8d2f-a96b-4d04-94b7-ab20bbcd3d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-607840\" primary control-plane node in \"insufficient-storage-607840\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0c83727-17f2-47a7-823a-621529476514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a407ad4-c35b-4889-bd3d-40b4529bb6bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"876bae73-abcf-4209-bf97-1897b7c8b91a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-607840 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-607840 --output=json --layout=cluster: exit status 7 (293.483686ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-607840","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-607840","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:36:36.077009  100687 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-607840" does not appear in /home/jenkins/minikube-integration/19479-4141166/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-607840 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-607840 --output=json --layout=cluster: exit status 7 (301.011357ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-607840","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-607840","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 13:36:36.380481  100748 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-607840" does not appear in /home/jenkins/minikube-integration/19479-4141166/kubeconfig
	E0819 13:36:36.390617  100748 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/insufficient-storage-607840/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-607840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-607840
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-607840: (1.899140266s)
--- PASS: TestInsufficientStorage (11.29s)
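
Note: the RSRC_DOCKER_STORAGE advice embedded in the JSON output above corresponds to these cleanup commands (restating the error's own suggestions):

	# reclaim space on the host Docker daemon
	docker system prune -a
	# or prune inside the minikube node when the Docker container runtime is in use
	minikube ssh -- docker system prune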

                                                
                                    
TestRunningBinaryUpgrade (92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3389092823 start -p running-upgrade-972435 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0819 13:39:41.427934 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3389092823 start -p running-upgrade-972435 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (54.436242857s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-972435 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-972435 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.17023785s)
helpers_test.go:175: Cleaning up "running-upgrade-972435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-972435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-972435: (2.685786289s)
--- PASS: TestRunningBinaryUpgrade (92.00s)

                                                
                                    
TestKubernetesUpgrade (102.89s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.316715985s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-573740
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-573740: (1.264904744s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-573740 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-573740 status --format={{.Host}}: exit status 7 (69.509593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0819 13:39:03.475071 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.919685104s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-573740 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (170.747754ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-573740] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-573740
	    minikube start -p kubernetes-upgrade-573740 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5737402 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-573740 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-573740 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.213525809s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-573740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-573740
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-573740: (2.606980942s)
--- PASS: TestKubernetesUpgrade (102.89s)
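
Note: the upgrade path walked here (and the downgrade it expects to be rejected) looks roughly like this when done by hand (<profile> is a placeholder):

	# bring up an old cluster, stop it, then restart it on a newer Kubernetes
	minikube start -p <profile> --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	minikube stop -p <profile>
	minikube start -p <profile> --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd
	# an in-place downgrade is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED)
	minikube start -p <profile> --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd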

                                                
                                    
TestMissingContainerUpgrade (175.69s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1962566817 start -p missing-upgrade-401495 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1962566817 start -p missing-upgrade-401495 --memory=2200 --driver=docker  --container-runtime=containerd: (1m26.452378924s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-401495
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-401495: (10.337667169s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-401495
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-401495 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-401495 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m15.194586413s)
helpers_test.go:175: Cleaning up "missing-upgrade-401495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-401495
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-401495: (2.957857646s)
--- PASS: TestMissingContainerUpgrade (175.69s)
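
Note: this scenario deletes the node container out from under the profile before restarting; roughly (the versioned /tmp path stands in for the old-release binary the suite downloads, and <profile> is a placeholder):

	# create a cluster with an old release, then remove its container directly in Docker
	/tmp/minikube-v1.26.0.<suffix> start -p <profile> --memory=2200 --driver=docker --container-runtime=containerd
	docker stop <profile>
	docker rm <profile>
	# the binary under test is expected to recreate the missing container on restart
	out/minikube-linux-arm64 start -p <profile> --memory=2200 --driver=docker --container-runtime=containerd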

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608228 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-608228 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (103.319319ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-608228] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608228 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608228 --driver=docker  --container-runtime=containerd: (39.695784234s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-608228 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.24s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608228 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608228 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.71407922s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-608228 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-608228 status -o json: exit status 2 (294.961921ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-608228","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-608228
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-608228: (1.887301293s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.90s)

                                                
                                    
TestNoKubernetes/serial/Start (7.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608228 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608228 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.943858535s)
--- PASS: TestNoKubernetes/serial/Start (7.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-608228 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-608228 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.125337ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-608228
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-608228: (1.274707943s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608228 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608228 --driver=docker  --container-runtime=containerd: (7.032510446s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-608228 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-608228 "sudo systemctl is-active --quiet service kubelet": exit status 1 (437.454026ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)
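
Note: the NoKubernetes checks verify that the kubelet unit is not active inside the node; a minimal sketch of the same check (<profile> is a placeholder):

	# start a profile without Kubernetes, then confirm kubelet is not running
	minikube start -p <profile> --no-kubernetes --driver=docker --container-runtime=containerd
	minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"
	echo $?   # a non-zero exit here is the expected (passing) outcome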

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (131.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2759970453 start -p stopped-upgrade-327103 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2759970453 start -p stopped-upgrade-327103 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (53.128243201s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2759970453 -p stopped-upgrade-327103 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2759970453 -p stopped-upgrade-327103 stop: (24.478765704s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-327103 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-327103 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (54.373079872s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (131.98s)
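
Note: the binary-upgrade pattern (old release creates and stops the cluster, the binary under test takes it over) reduces to roughly the following (the versioned /tmp path stands in for the old-release binary; <profile> is a placeholder):

	# create and stop a cluster with the previous minikube release
	/tmp/minikube-v1.26.0.<suffix> start -p <profile> --memory=2200 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.26.0.<suffix> -p <profile> stop
	# restart the same profile with the binary under test, then collect its logs
	out/minikube-linux-arm64 start -p <profile> --memory=2200 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 logs -p <profile>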

                                                
                                    
TestPause/serial/Start (66.27s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-597045 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-597045 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m6.268423069s)
--- PASS: TestPause/serial/Start (66.27s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-327103
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-327103: (2.024210088s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.02s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-597045 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-597045 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.793533715s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.81s)

                                                
                                    
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-597045 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-597045 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-597045 --output=json --layout=cluster: exit status 2 (319.406207ms)

                                                
                                                
-- stdout --
	{"Name":"pause-597045","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-597045","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-597045 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-597045 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
TestPause/serial/DeletePaused (2.59s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-597045 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-597045 --alsologtostderr -v=5: (2.594653823s)
--- PASS: TestPause/serial/DeletePaused (2.59s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (7.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (7.204473637s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-597045
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-597045: exit status 1 (33.151705ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-597045: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (7.30s)
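
Note: the pause lifecycle covered by the TestPause group condenses to the sequence below (<profile> is a placeholder):

	# pause, inspect, unpause and finally delete the profile
	minikube pause -p <profile>
	minikube status -p <profile> --output=json --layout=cluster   # reports StatusName "Paused" (code 418), exit status 2
	minikube unpause -p <profile>
	minikube delete -p <profile>
	# after deletion the backing Docker volume is gone
	docker volume inspect <profile>   # "no such volume", exit status 1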

                                                
                                    
TestNetworkPlugins/group/false (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-386048 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-386048 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (277.926463ms)

                                                
                                                
-- stdout --
	* [false-386048] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 13:42:38.505933  137561 out.go:345] Setting OutFile to fd 1 ...
	I0819 13:42:38.506095  137561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:42:38.506108  137561 out.go:358] Setting ErrFile to fd 2...
	I0819 13:42:38.506114  137561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 13:42:38.506440  137561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19479-4141166/.minikube/bin
	I0819 13:42:38.507010  137561 out.go:352] Setting JSON to false
	I0819 13:42:38.509336  137561 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":98702,"bootTime":1723976256,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 13:42:38.509420  137561 start.go:139] virtualization:  
	I0819 13:42:38.516471  137561 out.go:177] * [false-386048] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 13:42:38.519660  137561 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 13:42:38.519738  137561 notify.go:220] Checking for updates...
	I0819 13:42:38.525564  137561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 13:42:38.528027  137561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19479-4141166/kubeconfig
	I0819 13:42:38.530553  137561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19479-4141166/.minikube
	I0819 13:42:38.533039  137561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 13:42:38.535997  137561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 13:42:38.539046  137561 config.go:182] Loaded profile config "force-systemd-flag-705940": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 13:42:38.539149  137561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 13:42:38.585940  137561 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 13:42:38.586333  137561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 13:42:38.689725  137561 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:43 SystemTime:2024-08-19 13:42:38.668409358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 13:42:38.689841  137561 docker.go:307] overlay module found
	I0819 13:42:38.695706  137561 out.go:177] * Using the docker driver based on user configuration
	I0819 13:42:38.699074  137561 start.go:297] selected driver: docker
	I0819 13:42:38.699097  137561 start.go:901] validating driver "docker" against <nil>
	I0819 13:42:38.699111  137561 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 13:42:38.702947  137561 out.go:201] 
	W0819 13:42:38.706082  137561 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0819 13:42:38.709180  137561 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-386048 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-386048

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-386048

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-386048

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-386048

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-386048

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-386048

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-386048

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-386048

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-386048

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-386048

>>> host: /etc/nsswitch.conf:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /etc/hosts:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /etc/resolv.conf:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-386048

>>> host: crictl pods:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: crictl containers:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> k8s: describe netcat deployment:
error: context "false-386048" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-386048" does not exist

>>> k8s: netcat logs:
error: context "false-386048" does not exist

>>> k8s: describe coredns deployment:
error: context "false-386048" does not exist

>>> k8s: describe coredns pods:
error: context "false-386048" does not exist

>>> k8s: coredns logs:
error: context "false-386048" does not exist

>>> k8s: describe api server pod(s):
error: context "false-386048" does not exist

>>> k8s: api server logs:
error: context "false-386048" does not exist

>>> host: /etc/cni:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: ip a s:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: ip r s:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: iptables-save:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: iptables table nat:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> k8s: describe kube-proxy daemon set:
error: context "false-386048" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-386048" does not exist

>>> k8s: kube-proxy logs:
error: context "false-386048" does not exist

>>> host: kubelet daemon status:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: kubelet daemon config:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> k8s: kubelet logs:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-386048

>>> host: docker daemon status:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: docker daemon config:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /etc/docker/daemon.json:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: docker system info:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: cri-docker daemon status:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: cri-docker daemon config:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: cri-dockerd version:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: containerd daemon status:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: containerd daemon config:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /etc/containerd/config.toml:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: containerd config dump:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: crio daemon status:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: crio daemon config:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: /etc/crio:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

>>> host: crio config:
* Profile "false-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386048"

----------------------- debugLogs end: false-386048 [took: 4.556050535s] --------------------------------
helpers_test.go:175: Cleaning up "false-386048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-386048
--- PASS: TestNetworkPlugins/group/false (5.02s)
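Note: the repeated ">>> netcat: dig @10.96.0.10 ..." probes above are in-cluster DNS lookups against the kube-dns service IP; they fail here only because the false-386048 profile was intentionally never started, so no context exists. For reference, a minimal Go sketch of the same lookup is shown below. The service IP 10.96.0.10 and the name kubernetes.default.svc.cluster.local are taken from the log; the function and variable names are illustrative and this is not the test harness's code.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupViaClusterDNS resolves a name against the in-cluster DNS service
// (10.96.0.10 in the log above) instead of the host's default resolver.
func lookupViaClusterDNS(name, dnsAddr string) ([]string, error) {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Force every query to the cluster DNS endpoint.
			return d.DialContext(ctx, network, dnsAddr)
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	return r.LookupHost(ctx, name)
}

func main() {
	addrs, err := lookupViaClusterDNS("kubernetes.default.svc.cluster.local", "10.96.0.10:53")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```

This only succeeds when run from inside the cluster network (for example, from the netcat debug pod the harness normally deploys), which is why every probe above reports a missing context instead.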

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (135.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-914579 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 13:44:03.464469 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:44:41.428022 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-914579 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m15.200259668s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.20s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-914579 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [246089c5-f5fb-4270-9e9b-a8e3c2364a30] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [246089c5-f5fb-4270-9e9b-a8e3c2364a30] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003398467s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-914579 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.71s)
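Note: the DeployApp step applies testdata/busybox.yaml and then waits up to 8m0s for pods labelled integration-test=busybox to reach Running, as logged above. A rough client-go sketch of such a wait loop follows; the namespace, label selector, and timeout come from the log, while the kubeconfig path and function names are placeholders rather than the helpers_test.go implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPods polls until every pod matching the selector is Running,
// mirroring the "waiting 8m0s for pods matching ..." step in the log.
func waitForRunningPods(kubeconfig, namespace, selector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not running within %s", selector, timeout)
}

func main() {
	// Values taken from the test log; the kubeconfig path is a placeholder.
	err := waitForRunningPods("/path/to/kubeconfig", "default", "integration-test=busybox", 8*time.Minute)
	fmt.Println(err)
}
```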

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-914579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-914579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.033934439s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-914579 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-914579 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-914579 --alsologtostderr -v=3: (12.45160735s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.45s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-914579 -n old-k8s-version-914579
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-914579 -n old-k8s-version-914579: exit status 7 (88.821905ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-914579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
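Note: the EnableAddonAfterStop steps above first run out/minikube-linux-arm64 status --format={{.Host}} and accept a non-zero result, exit status 7 with "Stopped" on stdout (the "(may be ok)" note), before enabling the dashboard addon. A small Go sketch of that exit-code handling follows; the command line and the observed meaning of exit status 7 come from the log, and the helper name is made up, not the harness's implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and
// reports the printed state plus the exit code. In the log above a stopped
// cluster prints "Stopped" and exits with status 7, which the test tolerates.
func hostStatus(minikubeBin, profile string) (string, int, error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit still carries the state on stdout (e.g. "Stopped" / 7).
			return state, exitErr.ExitCode(), nil
		}
		return "", -1, err // e.g. binary not found, not an exit-status problem
	}
	return state, 0, nil
}

func main() {
	state, code, err := hostStatus("out/minikube-linux-arm64", "old-k8s-version-914579")
	fmt.Println(state, code, err)
}
```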

x
+
TestStartStop/group/no-preload/serial/FirstStart (80.35s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-895877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-895877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m20.34914446s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.35s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-895877 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [56b8befc-170f-4572-83de-d8886314e789] Pending
helpers_test.go:344: "busybox" [56b8befc-170f-4572-83de-d8886314e789] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [56b8befc-170f-4572-83de-d8886314e789] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004330495s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-895877 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.42s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-895877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-895877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058187155s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-895877 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

x
+
TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-895877 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-895877 --alsologtostderr -v=3: (12.120403076s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-895877 -n no-preload-895877
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-895877 -n no-preload-895877: exit status 7 (69.141465ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-895877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (267.86s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-895877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 13:49:03.465063 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:49:41.428953 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-895877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.488692294s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-895877 -n no-preload-895877
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.86s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-scjxq" [92d81d8d-166d-4928-b0f2-8871ddc0cfdc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004987893s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-scjxq" [92d81d8d-166d-4928-b0f2-8871ddc0cfdc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004622754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-895877 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-895877 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

x
+
TestStartStop/group/no-preload/serial/Pause (3.37s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-895877 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-895877 --alsologtostderr -v=1: (1.099464156s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-895877 -n no-preload-895877
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-895877 -n no-preload-895877: exit status 2 (330.280298ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-895877 -n no-preload-895877
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-895877 -n no-preload-895877: exit status 2 (336.606357ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-895877 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-895877 -n no-preload-895877
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-895877 -n no-preload-895877
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.37s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xrfgh" [b56480b0-fd9d-4bfd-bd97-ac55023aafda] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00483503s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/embed-certs/serial/FirstStart (56.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-969970 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-969970 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (56.330278636s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.33s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xrfgh" [b56480b0-fd9d-4bfd-bd97-ac55023aafda] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004152667s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-914579 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.24s)

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-914579 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (4.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-914579 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-914579 --alsologtostderr -v=1: (1.318724172s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-914579 -n old-k8s-version-914579
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-914579 -n old-k8s-version-914579: exit status 2 (611.528771ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-914579 -n old-k8s-version-914579
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-914579 -n old-k8s-version-914579: exit status 2 (425.281054ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-914579 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-914579 --alsologtostderr -v=1: (1.066385575s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-914579 -n old-k8s-version-914579
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-914579 -n old-k8s-version-914579
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.31s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-078872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 13:54:03.464964 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-078872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (59.758982229s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.76s)

x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-969970 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0825c601-e9c6-46b5-a743-b503acfdd3eb] Pending
helpers_test.go:344: "busybox" [0825c601-e9c6-46b5-a743-b503acfdd3eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0825c601-e9c6-46b5-a743-b503acfdd3eb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004831537s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-969970 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-969970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-969970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006940979s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-969970 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

x
+
TestStartStop/group/embed-certs/serial/Stop (12.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-969970 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-969970 --alsologtostderr -v=3: (12.120985327s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.12s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-078872 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [634e4f38-d9d2-4477-9065-bdd05d7fbfe3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [634e4f38-d9d2-4477-9065-bdd05d7fbfe3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006504522s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-078872 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969970 -n embed-certs-969970
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969970 -n embed-certs-969970: exit status 7 (73.65386ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-969970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

x
+
TestStartStop/group/embed-certs/serial/SecondStart (279.47s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-969970 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-969970 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m39.095973115s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969970 -n embed-certs-969970
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (279.47s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-078872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-078872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.433860271s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-078872 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.59s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-078872 --alsologtostderr -v=3
E0819 13:54:41.428853 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-078872 --alsologtostderr -v=3: (12.401177298s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.40s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872: exit status 7 (84.387466ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-078872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-078872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 13:56:18.585792 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:18.592239 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:18.603728 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:18.625306 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:18.666786 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:18.748250 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:18.909961 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:19.231609 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:19.873089 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:21.154760 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:23.716420 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:28.837744 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:39.079028 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:56:59.560473 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:57:40.522049 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.307731 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.314219 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.325660 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.347126 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.388638 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.470338 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.632094 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:01.953868 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:02.595772 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:03.877181 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:06.438991 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:11.561219 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:21.803768 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:58:42.285304 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:59:02.443979 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 13:59:03.465033 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-078872 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (5m3.373023185s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.81s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vv9ql" [ac2605a7-0c72-4727-8d77-1be5e3ebb0b6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004766829s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vv9ql" [ac2605a7-0c72-4727-8d77-1be5e3ebb0b6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003471913s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-969970 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-969970 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
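
The image audit above only lists what the container runtime holds for the profile and calls out anything the suite does not ship itself. A rough equivalent, assuming the same profile name and treating the --format=json output purely as text to scan, since its exact schema is not shown in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "embed-certs-969970" // hypothetical: profile whose images are being audited

	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", profile, "image", "list", "--format=json").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}

	// Repositories the run above reported as "non-minikube"; the JSON is only
	// scanned as text here rather than decoded into a schema.
	extras := []string{"kindest/kindnetd", "gcr.io/k8s-minikube/busybox"}
	for _, repo := range extras {
		fmt.Printf("%-30s present=%v\n", repo, strings.Contains(string(out), repo))
	}
}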

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-969970 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969970 -n embed-certs-969970
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969970 -n embed-certs-969970: exit status 2 (326.001275ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-969970 -n embed-certs-969970
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-969970 -n embed-certs-969970: exit status 2 (319.867886ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-969970 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969970 -n embed-certs-969970
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-969970 -n embed-certs-969970
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.07s)
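
The pause check above reduces to: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (minikube status exits non-zero in that state, which the test tolerates), then unpause and re-check. A minimal sketch of that flow in Go, assuming it runs from the build tree so the relative binary path resolves and that the embed-certs-969970 profile exists:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary with the given arguments and returns its
// combined output. A non-zero exit is printed but not treated as fatal, which
// mirrors the "status error: exit status 2 (may be ok)" handling above.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("non-zero exit for %v: %v\n", args, err)
	}
	return string(out)
}

func main() {
	profile := "embed-certs-969970" // hypothetical: substitute the profile under test

	run("pause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Print(run("status", "--format={{.APIServer}}", "-p", profile)) // expect "Paused"
	fmt.Print(run("status", "--format={{.Kubelet}}", "-p", profile))   // expect "Stopped"

	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Print(run("status", "--format={{.APIServer}}", "-p", profile))
	fmt.Print(run("status", "--format={{.Kubelet}}", "-p", profile))
}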

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (39.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-567371 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 13:59:41.428469 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-567371 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (39.612619331s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.61s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hb6x6" [0f98b87b-60fa-4778-b7dd-73df7a4df964] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003456185s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hb6x6" [0f98b87b-60fa-4778-b7dd-73df7a4df964] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02091239s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-078872 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-078872 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.90s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-078872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-078872 --alsologtostderr -v=1: (1.385425432s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872: exit status 2 (469.074659ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872: exit status 2 (453.906946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-078872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-078872 --alsologtostderr -v=1: (1.308903229s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-078872 -n default-k8s-diff-port-078872
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-567371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-567371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.332563119s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-567371 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-567371 --alsologtostderr -v=3: (1.393768334s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-567371 -n newest-cni-567371
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-567371 -n newest-cni-567371: exit status 7 (77.327524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-567371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)
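
The stop/enable sequence above leans on minikube status using exit code 7 to signal a stopped host, and on the fact that addons can be flagged on while the cluster is down so they come up at the next start. A small sketch of that check, with the profile name assumed:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "newest-cni-567371" // hypothetical: the profile stopped in the previous step

	// As in the log above, "status" exits with code 7 once the host is stopped;
	// the test treats that as acceptable rather than as a failure.
	err := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.Host}}", "-p", profile).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	}

	// Addons can be enabled while the cluster is down; they take effect on the
	// next start.
	enable := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard",
		"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		fmt.Println("enable dashboard failed:", err, string(out))
	}
}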

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (20.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-567371 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-567371 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (20.280782684s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-567371 -n newest-cni-567371
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (69.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m9.390052854s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-567371 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-567371 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-567371 -n newest-cni-567371
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-567371 -n newest-cni-567371: exit status 2 (421.63497ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-567371 -n newest-cni-567371
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-567371 -n newest-cni-567371: exit status 2 (430.818453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-567371 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-567371 -n newest-cni-567371
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-567371 -n newest-cni-567371
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.75s)
E0819 14:05:44.491077 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:18.586264 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.040356 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.046841 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.058547 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.080108 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.122423 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.203891 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.365556 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:20.687257 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:21.329256 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:22.611299 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:25.172752 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:30.294223 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:38.405666 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:38.412077 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:38.423418 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:38.444683 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:38.486095 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:38.567482 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:38.729392 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:39.051179 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:39.693336 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:40.535568 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/auto-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:40.975446 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:06:43.537288 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (62.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0819 14:00:45.176383 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:01:18.585307 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m2.606134895s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-386048 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-386048 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mxhfw" [2504ce30-9827-48cc-b275-8067ba5b01bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mxhfw" [2504ce30-9827-48cc-b275-8067ba5b01bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.00446962s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.31s)
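
Each NetCatPod step replaces the netcat deployment from testdata/netcat-deployment.yaml and waits for its pod to reach Running. Outside the test harness the same wait can be expressed directly with kubectl; a sketch, assuming the auto-386048 context and the manifest path used by the suite:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "auto-386048" // hypothetical kubectl context for the profile under test

	// Recreate the netcat deployment the suite uses, then block until its pod
	// reports Ready, with the same 15m ceiling the test applies.
	steps := [][]string{
		{"--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml"},
		{"--context", ctx, "wait", "--for=condition=Ready", "pod",
			"-l", "app=netcat", "-n", "default", "--timeout=15m"},
	}
	for _, args := range steps {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}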

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-386048 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
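
The three probes above (DNS, Localhost, HairPin) all run inside the netcat pod: an nslookup of kubernetes.default, a TCP connect to localhost:8080, and a connect back to the pod's own service name to confirm hairpin traffic works. A combined sketch reusing those exact commands, with the context name assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "auto-386048" // hypothetical kubectl context

	// The same three probes the suite runs inside the netcat pod.
	probes := map[string][]string{
		"dns":       {"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		"localhost": {"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range probes {
		full := append([]string{"--context", ctx}, args...)
		if err := exec.Command("kubectl", full...).Run(); err != nil {
			fmt.Printf("%s probe failed: %v\n", name, err)
		} else {
			fmt.Printf("%s probe ok\n", name)
		}
	}
}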

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6zkcj" [9aa6300d-1866-4e0c-beb1-56f236d21b9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007058579s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-386048 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-386048 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rfv8v" [8e2583a6-f531-4496-9388-aa191e659e0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 14:01:46.285709 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/old-k8s-version-914579/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rfv8v" [8e2583a6-f531-4496-9388-aa191e659e0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003528905s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (60.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.351387621s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-386048 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.623689563s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.62s)
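
The custom-flannel group differs from the named plugins only in how the cluster is started: --cni points at a manifest (testdata/kube-flannel.yaml) rather than a built-in plugin name. A sketch of that invocation with the same flags as above, profile name assumed and a generous outer timeout added:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "custom-flannel-386048" // hypothetical profile name

	// Same flags as the run above: bring the cluster up with a user-supplied
	// CNI manifest instead of one of the built-in plugin names.
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
		"-p", profile, "--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Println("start failed:", err)
		fmt.Print(string(out))
	}
}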

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9ghkz" [5310264a-abb6-4e94-8edd-8e7790ae669d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-9ghkz" [5310264a-abb6-4e94-8edd-8e7790ae669d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005321906s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-386048 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-386048 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xwb86" [b0f80209-7888-4629-8697-3d85b63cbebc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 14:03:01.307540 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-xwb86" [b0f80209-7888-4629-8697-3d85b63cbebc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004706831s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-386048 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-386048 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-386048 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8d88l" [4f287147-0753-4290-bde2-dfa0446d1952] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8d88l" [4f287147-0753-4290-bde2-dfa0446d1952] Running
E0819 14:03:29.018230 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/no-preload-895877/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010080548s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-386048 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (84.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0819 14:03:46.532747 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m24.417881502s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0819 14:04:03.464527 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/functional-893834/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:22.554127 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:22.560580 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:22.571851 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:22.593247 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:22.634649 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:22.716045 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:22.877513 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:23.199634 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:23.840969 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:25.122903 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:27.684287 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:32.805601 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:41.428845 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
E0819 14:04:43.047048 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.392024979s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k699z" [8af50477-e1f2-48df-91f7-64bcc3c6b60b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004769442s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
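
Plugins that ship their own controller (kindnet, calico, flannel) are first checked for a healthy controller pod before any connectivity probes run; for kube-flannel those pods carry the app=flannel label in the kube-flannel namespace, as the log shows. A minimal wait for that condition via kubectl, with the context name assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "flannel-386048" // hypothetical kubectl context for the flannel profile

	// Wait up to 10 minutes, as the test does, for every flannel pod to be Ready.
	out, err := exec.Command("kubectl", "--context", ctx,
		"wait", "--for=condition=Ready", "pod",
		"-l", "app=flannel", "-n", "kube-flannel", "--timeout=10m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("flannel controller pods did not become Ready:", err)
	}
}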

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-386048 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-386048 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nbxqk" [d9f4c9f9-bd3c-41dd-8e6c-c669e8ab6af2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nbxqk" [d9f4c9f9-bd3c-41dd-8e6c-c669e8ab6af2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.008821946s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-386048 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-386048 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bbgzl" [fe8a574f-2d0b-4290-80a5-425b64760b5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bbgzl" [fe8a574f-2d0b-4290-80a5-425b64760b5c] Running
E0819 14:05:03.528852 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/default-k8s-diff-port-078872/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00473942s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-386048 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-386048 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (69.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-386048 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m9.153608612s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.15s)
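
For a manual spot-check of which CNI configuration the --cni=bridge start above actually laid down, one illustrative follow-up (assuming the bridge-386048 profile from this run; these commands are a sketch, not part of the test) is:

out/minikube-linux-arm64 ssh -p bridge-386048 "sudo ls /etc/cni/net.d"          # list the CNI config files written onto the node
out/minikube-linux-arm64 ssh -p bridge-386048 "sudo cat /etc/cni/net.d/*.conflist"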

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-386048 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-386048 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hxq9b" [1536ecd6-bb74-4e5a-86e2-f048311319a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hxq9b" [1536ecd6-bb74-4e5a-86e2-f048311319a8] Running
E0819 14:06:48.659629 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/kindnet-386048/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003944173s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)
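
For context on what these NetCatPod steps deploy: the log shows a Deployment named netcat (label app=netcat, container dnsutils) that the later DNS/Localhost/HairPin probes exercise on port 8080. Below is a minimal sketch of such a probe workload; the image, command, and Service are placeholder assumptions chosen for illustration, not the contents of the repository's testdata/netcat-deployment.yaml.

kubectl --context bridge-386048 apply -f - <<'EOF'
# Illustrative probe workload: serves TCP 8080 and ships nslookup/nc (busybox).
# Image and command are placeholder assumptions, not the real testdata manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils
        image: busybox:1.36
        command: ["sh", "-c", "while true; do echo ok | nc -l -p 8080; done"]
        ports:
        - containerPort: 8080
---
# A Service named netcat is assumed so the hairpin probe (nc ... netcat 8080) has a name to resolve.
apiVersion: v1
kind: Service
metadata:
  name: netcat
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
    targetPort: 8080
EOF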

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-386048 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
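
Each plugin's connectivity suite above boils down to the same three probes against that netcat deployment: service DNS resolution, a localhost port check inside the pod, and a hairpin check (the pod reaching itself through its own Service name). To re-run them by hand against the bridge profile, the commands are the ones already captured in the log:

kubectl --context bridge-386048 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context bridge-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context bridge-386048 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"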

                                                
                                    

Test skip (28/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-715057 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-715057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-715057
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-269501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-269501
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-386048 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-386048" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19479-4141166/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 13:42:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-705940
contexts:
- context:
    cluster: force-systemd-flag-705940
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 13:42:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: force-systemd-flag-705940
  name: force-systemd-flag-705940
current-context: force-systemd-flag-705940
kind: Config
preferences: {}
users:
- name: force-systemd-flag-705940
  user:
    client-certificate: /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/force-systemd-flag-705940/client.crt
    client-key: /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/force-systemd-flag-705940/client.key
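
The captured kubeconfig above still points at a leftover force-systemd-flag-705940 profile rather than kubenet-386048, which is why every kubectl probe in this debugLogs dump fails with "context was not found for specified context: kubenet-386048". To list which contexts a kubeconfig actually defines (standard kubectl commands, shown only as a reproduction aid):

kubectl config get-contexts        # every context the active kubeconfig knows about
kubectl config current-context     # the one kubectl would use by default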

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-386048

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386048"

                                                
                                                
----------------------- debugLogs end: kubenet-386048 [took: 5.49528203s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-386048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-386048
--- SKIP: TestNetworkPlugins/group/kubenet (5.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0819 13:42:44.510166 4146547 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19479-4141166/.minikube/profiles/addons-789485/client.crt: no such file or directory" logger="UnhandledError"
panic.go:626: 
----------------------- debugLogs start: cilium-386048 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-386048" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-386048

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: cri-docker daemon config:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: cri-dockerd version:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: containerd daemon status:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: containerd daemon config:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: containerd config dump:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: crio daemon status:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: crio daemon config:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: /etc/crio:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

>>> host: crio config:
* Profile "cilium-386048" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386048"

----------------------- debugLogs end: cilium-386048 [took: 5.578129754s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-386048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-386048
--- SKIP: TestNetworkPlugins/group/cilium (5.78s)